A system design in which cyber-physical applications are securely coordinated from the cloud may simplify the development process. However, all private data are then pushed to these remote “swamps,” and human users lose real control compared to executing the applications directly on their devices. At the same time, computing at the network edge still lacks support for such straightforward...
The problem of the discovery and marketing of new drugs can be vastly accelerated through High Performance Computing (HPC), molecular modeling techniques, and more specifically by the techniques commonly known as computational drug discovery (CDD) and in silico high-throughput screening. These techniques usually assume a unique interaction site (active site) between potential drugs and a...
The advances in computational techniques, from both a software and a hardware viewpoint, have led to the development of projects whose complexity can be quite challenging, e.g., biomedical simulations. To deal with the increased demand for computational power, many collaborative approaches have been proposed that apply a proper partitioning strategy, able to assign pieces of execution to a crowd...
This paper introduces a cost-effective scheme for a Cloud Radio Access Network (C-RAN) based on virtualization, a technology used to reduce Power Consumption (PC) in the Base Band Unit (BBU) pool. The BBU's functions are proposed to run as a software application on servers, called the virtual BBU (vBBU). To assess the proposed scheme, a power model of the BBU pool is proposed...
With the growing popularity of Internet of Things (IoT) services being applied in several aspects of real-life applications, performance has become an important requirement. Meanwhile, techniques for reliability enhancement such as virtual machine migration and recovery also have a significant impact on end-to-end performance. This paper proposes a predictive approach to reliability-aware performance...
This paper deals with reducing the number of comatose servers. The defining characteristic of such a server is that it consumes electricity while delivering no useful information services. According to recent studies, up to 30% of servers (including those in datacenters) are comatose. The existence of these servers lowers the appeal of clouds for green computing. Our paper assumes a cloud provider...
Quality-of-Service attributes such as performance and reliability heavily depend on the run-time conditions under which software is executed (e.g., workload fluctuation and resource availability). Therefore, it is important to design systems able to adapt their settings and behavior in response to these run-time variabilities. In this paper we propose a novel approach based on queuing networks as the quantitative...
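The abstract above names queuing networks as its quantitative model without showing one. As a hedged illustration of the general idea only (an M/M/1 single-server queue, not the paper's actual model), the following sketch shows how mean response time reacts to workload fluctuation, which is exactly the kind of run-time variability that motivates adaptation:

```python
# Illustrative sketch only (not the paper's model): an M/M/1 queue shows how
# mean response time degrades as the arrival rate approaches service capacity.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time R = 1 / (mu - lambda); requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Light load vs. near-saturation: response time grows sharply near capacity,
# motivating run-time adaptation (e.g., provisioning another server).
light = mm1_response_time(arrival_rate=2.0, service_rate=10.0)  # 0.125
heavy = mm1_response_time(arrival_rate=9.0, service_rate=10.0)  # 1.0
print(light, heavy)
```

An adaptive system built on such a model would monitor the arrival rate at run time and reconfigure itself before response time crosses a QoS threshold.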
Security is often treated as a secondary or non-functional feature of software, which influences how vendors and developers describe their products to customers, often in terms of what the products can do (Use Cases) or offer. However, the tide is beginning to turn as more experienced customers demand more secure and reliable software, giving priority to confidentiality, integrity...
Recently, there have been significant advances in the areas of networking, caching and computing. Nevertheless, these three important areas have traditionally been addressed separately in the existing research. In this paper, we present a novel framework that integrates networking, caching and computing in a systematic way and enables dynamic orchestration of these three resources to improve the end-to-end...
The growing demand on the performance of transparent computing systems requires good cache schemes to overcome prolonged network latency. However, evaluating cache schemes, and especially measuring the performance of a transparent computing system with a particular cache scheme, remains challenging. This is because no method is available to evaluate the effectiveness and efficiency of the...
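The abstract above concerns measuring how well a cache scheme performs. As a hedged sketch of the general kind of measurement involved (a toy LRU trace replay of my own, not the paper's evaluation method), one can replay a block-access trace through a cache and report the hit ratio:

```python
from collections import OrderedDict

# Illustrative sketch only: replay an access trace through an LRU cache
# and measure the hit ratio, the basic metric for comparing cache schemes.
def lru_hit_ratio(trace, capacity):
    """Return the fraction of accesses in `trace` served from an LRU cache."""
    cache = OrderedDict()
    hits = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return hits / len(trace)

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
print(lru_hit_ratio(trace, capacity=3))  # 0.4
```

Evaluating a real transparent computing system would additionally have to account for network latency and the client/server split, which is what makes the measurement problem the abstract describes hard.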
Virtual switches are key elements within the new paradigms of Software Defined Networking (SDN) and Network Function Virtualization (NFV). Unlike proprietary networking appliances, virtual switches come with a high level of flexibility in the management of their physical resources, such as the number of CPU cores, their allocation to the switching function, and the capacities of the RX queues, which...
Pervasive computing is an increasingly active research area nowadays. The main problem to be solved in pervasive computing is accurate service discovery and service provision within a stipulated time interval. This paper proposes a REQ-RES Matching based service discovery method to improve the efficiency of service discovery and service provision. Present and permanent information of services is stored...
Electrophysiological simulations are computationally expensive tasks. These kinds of simulations are usually run on supercomputers or clusters, which may be expensive or difficult to access. In this work we present DENIS@Home, a simulation platform that follows the Volunteer Computing paradigm. DENIS@Home is based on BOINC, a volunteer computing environment used worldwide, and on CellML, an open...
Cyber-physical systems (CPS) are large-scale systems highly integrated with the physical environment. Given the changing nature of physical environments, CPS must be able to adapt on-line to new situations while preserving their correct operation. Correctness by construction relies on using formal tools, which suffer from a considerable computational overhead, especially if executed on-line. As the...
The paper presents a detailed techno-economic model for LTE networks with inherent coupling of CAPEX and OPEX costs for each real or virtualized network element. The life-cycle phases of a network, from the idea of setting up a certain product or service, through the installation and operation of the network, up to the decommissioning of the equipment, are taken into account. The work was performed within...
Attack Graphs (AGs) are a well-known formalism and there are tools available for AG generation and security risk analysis. The security posture of a networked system can be evaluated via an AG. However, as the size of the system becomes large, the AG suffers from the state-space explosion problem. Scalable security models have been developed to cope with this issue. Hierarchical Attack Representation...
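The abstract above describes AGs and their state-space explosion without an example. As a hedged sketch of the general formalism only (a hypothetical four-host topology, not output from any real AG tool and not the paper's hierarchical model), an AG can be represented as a directed graph whose attack paths are enumerated by depth-first search; the number of paths, and hence the analysis cost, grows rapidly with system size:

```python
# Illustrative sketch only: a tiny hypothetical attack graph where nodes are
# attacker-reachable states and edges are exploitable steps between them.
attack_graph = {
    "internet": ["web_server"],
    "web_server": ["app_server", "file_server"],
    "app_server": ["database"],
    "file_server": ["database"],
    "database": [],
}

def attack_paths(graph, source, target, path=None):
    """Enumerate all simple attack paths from source to target via DFS."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    paths = []
    for nxt in graph.get(source, []):
        if nxt not in path:  # simple paths only: skip already-visited states
            paths.extend(attack_paths(graph, nxt, target, path))
    return paths

for p in attack_paths(attack_graph, "internet", "database"):
    print(" -> ".join(p))
```

Even this toy graph has two distinct paths to the database; in a large networked system the path count explodes combinatorially, which is the scalability problem that hierarchical representations are designed to contain.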
In many enterprises the number of deployed applications is constantly increasing. Those applications - often several hundred - form large software landscapes. The comprehension of such landscapes is frequently impeded by, for instance, architectural erosion, personnel turnover, or changing requirements. Therefore, an efficient and effective way to comprehend such software landscapes is required...
In recent years, IT Service Providers have been rapidly transforming to an automated service delivery model. This transformation is enabled by advances in technology and driven by unrelenting market pressure to reduce cost and maintain quality. Tremendous progress has been made to date towards attainment of truly automated service delivery; that is, the ability to deliver the same service automatically using the...
The demand for parallel I/O performance continues to grow. However, modelling and generating parallel I/O workloads is challenging for several reasons, including the large number of processes, I/O request dependencies, and workload scalability. In this paper, we propose PIONEER, a complete solution to Parallel I/O workload characterization and gEnERation. The core of PIONEER is a proposed generic...
Learning from demonstration (LfD) is a common technique applied to many problems in robotics, such as populating grasp databases, training for reinforcement learning of high-level skill sets and bootstrapping motion planners. While such approaches are generally highly valued, they rely on the often time-consuming process of gathering user demonstrations, and hence it becomes difficult to attain a...