The growing demand for flexibility and cost reduction in the telecommunication landscape directs the focus of service development heavily to programmability and softwarization. In the domain of Network Function Virtualization (NFV), one of the goals is to replace dedicated hardware devices (such as switches, routers, firewalls) with software-based network functionalities, showing comparable performance...
Cloud datacenters must ensure high availability for the hosted applications and failures can be the bane of datacenter operators. Understanding the what, when and why of failures can help tremendously to mitigate their occurrence and impact. Failures can, however, depend on numerous spatial and temporal factors spanning hardware, workloads, support facilities, and even the environment. One has to...
The huge energy consumption of data centers produces not only high electricity bills but also a tremendous carbon footprint. Although today's servers and data centers of leading internet companies are more energy efficient than ever before, the fluctuations in external workload and internal resource utilization call for energy-proportional computing. Insight into server energy proportionality can help...
In order to provide the cloud computing research community with a full-system-level datacenter server emulator with programmable hardware and software, and to stimulate more innovative research, this poster and demo presents a scientific research platform, Titian2, designed and implemented at ICT of CAS. Titian2 offers on-line profiling and measurement, and the scalability of connecting with...
A new energy-proportional computing model extends Barroso and Hölzle's original definition for fixed-resource systems to aid in the design of more efficient modern systems with reconfigurable resources that can be varied at runtime.
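As a rough illustration of the fixed-resource case the new model extends, the sketch below computes a simple proportionality score: the ratio of the energy an ideally proportional server would consume (power scaling linearly from zero at idle to peak at full utilization) to the energy actually measured. The formula and the power trace are illustrative assumptions, not the model defined in the paper.

```python
# Hedged sketch: a simple energy-proportionality score for a server with
# fixed resources. The exact formula is an illustrative assumption.

def proportionality_score(samples):
    """samples: list of (utilization in [0, 1], measured power in watts).

    An ideally proportional server draws power = utilization * peak_power,
    i.e. zero power at idle. The score is the ratio of ideal to measured
    energy over the samples; 1.0 means perfectly proportional.
    """
    peak_power = max(p for _, p in samples)
    ideal = sum(u * peak_power for u, _ in samples)
    measured = sum(p for _, p in samples)
    return ideal / measured

# Example: a server idling at 180 W with a 300 W peak is far from proportional.
trace = [(0.0, 180.0), (0.25, 210.0), (0.5, 240.0), (0.75, 270.0), (1.0, 300.0)]
print(f"EP score: {proportionality_score(trace):.2f}")  # 0.62, well below 1.0
```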
A spectrum of solutions is available for distributing content over the Internet today. One of these solutions is the content distribution network (CDN). CDNs need to make decisions, such as server selection and routing, to improve the performance of content distribution. However, performance may be limited by various factors such as packet loss in the network, a small receive...
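As one concrete (and deliberately simplified) example of such a server-selection decision, the sketch below routes a client to the replica minimizing an estimated response time that combines measured RTT with a load penalty. The data structures and weighting are assumptions for illustration, not any CDN's actual policy.

```python
# Hedged sketch of a simple CDN server-selection policy.
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    rtt_ms: float   # measured round-trip time to the client
    load: float     # current utilization in [0, 1]

def select_replica(replicas, load_penalty_ms=50.0):
    """Return the replica minimizing RTT plus a load-dependent penalty."""
    return min(replicas, key=lambda r: r.rtt_ms + load_penalty_ms * r.load)

replicas = [
    Replica("edge-eu", rtt_ms=12.0, load=0.9),
    Replica("edge-us", rtt_ms=35.0, load=0.2),
]
print(select_replica(replicas).name)  # "edge-us": the lightly loaded replica wins
```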
In this paper we introduce a novel, dense, system-on-chip many-core Lenovo NeXtScale System® server based on the Cavium THUNDERX® ARMv8 processor, designed for performance, energy efficiency and programmability. The THUNDERX processor was designed to scale up to 96 cores in a cache-coherent, shared-memory architecture. Furthermore, this hardware system has a power interface board (PIB) that measures...
Network Functions Virtualization (NFV) aims at replacing proprietary hardware appliances with software running on standardized, general-purpose computing platforms. Recently, the concept has gained traction in the industry and major deployments have been announced. However, benchmarking the performance characteristics of virtualized network functions (VNFs) is still an active research topic. VNFs...
Current cloud users pay for statically configured VM sizes irrespective of usage. It is more favorable for users to consume (and be billed for) just the right amount of resources necessary to satisfy the performance requirement of their applications. We take a novel perspective to enable such resource usage, where we assume that the cloud operator exposes a small, dynamic fraction of its infrastructure,...
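A minimal sketch of the billing contrast the abstract describes, under an assumed per-vCPU-hour price and a simple linear pricing model (not the paper's scheme):

```python
# Hedged sketch: static VM billing vs usage-based billing.
# Prices and the pricing model are illustrative assumptions.

STATIC_VCPUS = 8               # provisioned size, billed regardless of use
PRICE_PER_VCPU_HOUR = 0.05     # hypothetical price

def static_bill(hours):
    """Bill for the full provisioned size every hour."""
    return STATIC_VCPUS * PRICE_PER_VCPU_HOUR * hours

def usage_bill(vcpu_usage_by_hour):
    """Bill only the vCPUs actually consumed each hour."""
    return sum(u * PRICE_PER_VCPU_HOUR for u in vcpu_usage_by_hour)

usage = [2, 2, 3, 8, 8, 4, 2, 1]  # vCPUs actually needed over 8 hours
print(f"static: ${static_bill(len(usage)):.2f}, usage-based: ${usage_bill(usage):.2f}")
# static: $3.20, usage-based: $1.50
```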
Over the past decade, platforms at Los Alamos National Laboratory (LANL) have experienced large increases in complexity and scale to reach computational targets. The changes to the compute platforms have presented new challenges to the production monitoring systems, which must not only cope with larger volumes of monitoring data, but also provide new capabilities for the management, distribution,...
In this paper we consider a root-cause analysis framework for NFV infrastructure. As the monitoring machinery for NFV has matured, the next step is to leverage such data to automatically optimize failure detection, analysis, and overall resiliency. The complex architecture and dynamics of NFV pose significant challenges from the point of view of causality inference. In particular, the need for an approach...
The big data revolution has created an unprecedented demand for intelligent data management solutions at large scale. While data management has traditionally been used as a synonym for relational data processing, in recent years a new group of systems popularly known as NoSQL databases has emerged as a competitive alternative. There is a pressing need to gain greater understanding of the characteristics of modern...
Network function virtualization (NFV) introduces additional complexity to network management, since the placement and behavior of virtualized network functions (VNFs) can be independent of the underlying hardware, and virtualization technology increases the number of monitoring points and the amount of statistical data. In our previous work, we proposed a framework for detecting anomalous behavior...
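The proposed framework itself is not detailed in this snippet; purely as an illustration of detecting anomalous behavior in such monitoring data, the sketch below flags samples that deviate strongly from a rolling window. This stand-in z-score detector is an assumption, not the paper's method.

```python
# Hedged sketch: rolling z-score anomaly detection over a VNF metric stream.
import statistics
from collections import deque

class ZScoreDetector:
    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # recent metric samples
        self.threshold = threshold

    def observe(self, value):
        """Return True if value deviates strongly from the recent window."""
        anomalous = False
        if len(self.window) >= 10:
            mean = statistics.fmean(self.window)
            stdev = statistics.stdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = ZScoreDetector()
for cpu in [0.31, 0.29, 0.33, 0.30] * 5 + [0.95]:  # sudden CPU spike at the end
    if detector.observe(cpu):
        print(f"anomaly: cpu={cpu}")
```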
While workload collocation is a necessity to increase energy efficiency of contemporary multi-core hardware, it also increases the risk of performance anomalies due to workload interference. Pinning certain workloads to a subset of CPUs is a simple approach to increasing workload isolation, but its effect depends on workload type and system architecture. Apart from common sense guidelines, the effect...
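On Linux, the pinning the abstract describes can be expressed with os.sched_setaffinity; the sketch below is a minimal example, with core numbers chosen purely for illustration.

```python
# Hedged sketch: pin a workload to a CPU subset on Linux (Linux-only API).
import os

def pin_to_cpus(cpus, pid=0):
    """Restrict a process (0 = the calling process) to the given CPU set."""
    os.sched_setaffinity(pid, set(cpus))

if __name__ == "__main__":
    # Reserve cores 0-3 for a latency-sensitive workload; a colocated
    # batch job would be pinned to the remaining cores by the same call.
    pin_to_cpus({0, 1, 2, 3})
    print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```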
Network function virtualization introduces additional complexity for network management through the use of virtualization environments. The amount of managed data and the operational complexity increase, which makes service assurance and failure recovery harder to realize. In response to this challenge, the paper proposes a distributed management function, called virtualized network management function...
Cloud businesses need comprehensive visibility on hardware and software components, their utilization and their configuration. In addition, they need to integrate such information with their asset management systems and publicly available information such as hardware specifications. In this paper, we present an approach for cloud management and monitoring based on a semantic layer that unifies different...
Hardware-assisted security is emerging as a promising avenue for protecting computer systems. Hardware based solutions, such as Physical Unclonable Functions (PUF), enable system authentication by relying on the physical attributes of the silicon to serve as fingerprints. A variety of PUF designs have been proposed by researchers, with some gaining commercial success. Virtually all of these systems...
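A PUF is typically used through challenge-response pairs: the verifier records responses at enrollment and later replays challenges to authenticate the device. The sketch below illustrates only the protocol shape; a keyed hash stands in for the physical silicon function, which is an assumption made purely for illustration.

```python
# Hedged sketch: PUF-style challenge-response authentication.
# A real PUF derives responses from silicon variation; here an HMAC
# over a per-device secret stands in for the physical function.
import hashlib
import hmac
import secrets

DEVICE_SECRET = secrets.token_bytes(32)  # stands in for the chip's physics

def puf_response(challenge: bytes) -> bytes:
    """Device side: derive a response unique to this device."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records challenge/response pairs (CRPs).
challenge = secrets.token_bytes(16)
enrolled = puf_response(challenge)

# Authentication: replay the challenge and compare responses.
assert hmac.compare_digest(puf_response(challenge), enrolled)
print("device authenticated")
```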
Total Cost of Ownership (TCO) is a key optimization metric for the design of a datacenter. This paper proposes, for the first time, a framework for modeling the implications of DRAM failures and DRAM error protection techniques on the TCO of a datacenter. The framework captures the effects and interactions of several key parameters including: the choice of DRAM protection technique (e.g. single vs...
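As a toy illustration of the trade-off such a framework quantifies (not the paper's model), the sketch below sums memory capex, inflated by a protection overhead, with the expected cost of DRAM failures over the lifetime; all rates and prices are assumed.

```python
# Hedged sketch: stronger DRAM protection raises memory cost but lowers
# the expected cost of failures. All numbers are illustrative assumptions.

def datacenter_tco(servers, dram_cost, protection_overhead,
                   failure_rate, failure_cost, years=5):
    """Memory capex plus expected DRAM-failure opex over the lifetime.

    protection_overhead: extra DRAM cost fraction (e.g. 0.125 for SEC-DED ECC).
    failure_rate: expected uncorrected DRAM failures per server per year.
    failure_cost: cost of one failure (downtime, repair, SLA penalties).
    """
    capex = servers * dram_cost * (1 + protection_overhead)
    opex = servers * failure_rate * failure_cost * years
    return capex + opex

# Compare no protection vs SEC-DED ECC under assumed rates and costs.
print(datacenter_tco(10_000, 500, 0.0,   0.02,  5_000))  # unprotected: 10.0M
print(datacenter_tco(10_000, 500, 0.125, 0.002, 5_000))  # ECC: ~6.1M
```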
Traditional benchmarking of new architectures often involves running a workload on a single system with a single OS. In such a setup, the objective is typically to stress a single resource (e.g., CPU) and produce a single number used to characterize the performance of the system. Newer benchmarks have extended this paradigm by testing the performance of distributed systems like Hadoop clusters or...
We propose a basic metric for design sheets of infrastructure in order to improve infrastructure quality. The metric is called LOI (Line Of Item). LOI is a simple metric for measuring the scale and quality of infrastructure, analogous to LOC (Line Of Code) for software. In addition, a class diagram of design sheets is provided using the UML software design technique. LOI and the class diagram...
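A minimal sketch of how LOI might be counted over a design sheet exported as CSV; the file layout, the hypothetical filename, and the rule that each non-empty data row counts as one item are assumptions for illustration, not the paper's definition.

```python
# Hedged sketch: count LOI (Line Of Item) in a design-sheet CSV,
# analogous to counting lines of code.
import csv

def count_loi(path):
    """Count non-empty item rows, skipping the header row."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return sum(1 for row in rows[1:] if any(cell.strip() for cell in row))

# Example: an infrastructure design sheet listing servers, VLANs, etc.
# print(count_loi("network_design.csv"))  # hypothetical file
```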