Today, mixed-criticality systems are used in most industrial domains because of their integration advantages: they are smaller, weigh less, and reduce the idle time of previously dedicated hardware. However, these systems can still be improved. Since their hardware is now used more efficiently, it suffers more from the aging effects of the heat generated by all the simultaneous computations...
The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute its code only after all the other nodes have executed the previous step. However, if the code is parallelized by partitioning the space of the automata, these synchronization requirements can be...
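The lock-step requirement described above can be illustrated with a small sketch (an assumed example, not the paper's code): a 1D cyclic cellular automaton (elementary rule 110) whose cells are partitioned among worker threads, with a barrier ensuring that no partition starts step t+1 before all partitions have finished step t.

```python
import threading

# Rule 110 transition table: neighborhood (left, self, right) -> new state.
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def run_ca(cells, steps, workers=4):
    """Run a cyclic 1D CA for `steps` steps, space-partitioned over threads."""
    n = len(cells)
    state, nxt = list(cells), [0] * n
    barrier = threading.Barrier(workers)

    def worker(lo, hi):
        nonlocal state, nxt
        for _ in range(steps):
            for i in range(lo, hi):
                # state[i-1] wraps for i == 0 via Python's negative indexing.
                nxt[i] = RULE110[(state[i-1], state[i], state[(i+1) % n])]
            if barrier.wait() == 0:   # exactly one thread swaps the buffers
                state, nxt = nxt, state
            barrier.wait()            # all threads now see the swapped buffers

    chunk = n // workers
    threads = [threading.Thread(target=worker,
                                args=(k * chunk,
                                      n if k == workers - 1 else (k + 1) * chunk))
               for k in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return state
```

The two barrier waits per step are exactly the global synchronization cost the abstract refers to: every partition pays it on every step, regardless of how little its boundary cells actually changed.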
The use of High Performance Computing (HPC) technologies is gaining interest in the field of neuronal activity simulations. In fact, scientists' main goal is to understand and reproduce the behavior of cells in a realistic way. This will allow undertaking in silico experiments, instead of in vivo ones, to test new medicines, study cerebral pathologies, and discover innovative therapies. To this aim,...
Wireless body area network (WBAN) is an emerging technology that serves as a basis for various implantable and wearable sensors. This paper presents an integrated resource (bandwidth) allocation method to share the limited bandwidth among multiple WBANs. On the basis of the coexistent WBAN model, the traffic source of each WBAN is parameterized using the twin token bucket model. We design a scheduling...
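A twin (dual) token bucket is commonly parameterized by a sustained-rate bucket and a peak-rate bucket; a packet conforms only if both buckets can cover its size. The sketch below assumes that common form — the class name, parameters, and rates are illustrative, not taken from the paper.

```python
class TwinTokenBucket:
    """Dual token bucket: (r, b) bounds the sustained rate and burst,
    (p, bp) bounds the peak rate and peak burst."""

    def __init__(self, r, b, p, bp):
        self.r, self.cap_r = r, b      # sustained rate, burst size
        self.p, self.cap_p = p, bp     # peak rate, peak burst size
        self.tok_r, self.tok_p = b, bp # both buckets start full
        self.last = 0.0                # timestamp of the previous arrival

    def conforms(self, size, now):
        """Refill both buckets for the elapsed time, then test conformance."""
        dt = now - self.last
        self.last = now
        self.tok_r = min(self.cap_r, self.tok_r + self.r * dt)
        self.tok_p = min(self.cap_p, self.tok_p + self.p * dt)
        if self.tok_r >= size and self.tok_p >= size:
            self.tok_r -= size
            self.tok_p -= size
            return True
        return False
```

The peak bucket limits how fast back-to-back packets may arrive, while the sustained bucket limits the long-run rate — which is what lets a scheduler reason about each WBAN's worst-case bandwidth demand.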
This paper presents an efficient parallelization of the Motion Estimation procedure, one of the core parts of Super Resolution techniques. The algorithm considered is the basic version of Block Matching Super Resolution, with a single low-resolution camera and fixed Macro Block dimensions. Two implementations are provided: one with OpenMP and one in CUDA on an NVIDIA Kepler GPU. Tests have been conducted...
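The block-matching core that such implementations parallelize is, at heart, an exhaustive search over candidate displacements minimizing a matching cost such as the sum of absolute differences (SAD). A minimal sequential sketch follows; the frame layout, block size, and search range are assumptions for illustration, and the OpenMP/CUDA versions would parallelize the loop over macro blocks.

```python
def sad(ref, cur, rx, ry, cx, cy, bs):
    """Sum of absolute differences between a bs x bs block at (rx, ry)
    in the reference frame and one at (cx, cy) in the current frame."""
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(bs) for i in range(bs))

def best_motion_vector(ref, cur, bx, by, bs=8, search=4):
    """Full search: try every displacement in a (2*search+1)^2 window
    around the block at (bx, by) and keep the lowest-SAD candidate."""
    h, w = len(ref), len(ref[0])
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx <= w - bs and 0 <= ry <= h - bs:
                cost = sad(ref, cur, rx, ry, bx, by, bs)
                if cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv
```

Because every candidate displacement of every macro block is independent, the search maps naturally onto OpenMP threads or CUDA thread blocks.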
Electronic identity (eID) is on everyone's lips, as it is increasingly used in various services nowadays. In Europe, EU (European Union) Regulation N.910/2014 on electronic identification and trust services for electronic transactions in the internal market (the eIDAS Regulation) has also created a legal structure for electronic identification, signatures, seals and documents throughout the EU, so an intensive...
Power-aware computing is gaining increasing attention in both academic and industrial settings. The problem of guaranteeing a given QoS requirement (either in terms of performance or power consumption) can be addressed by selecting and dynamically adapting the amount of physical and logical resources used by the application. In this study, we considered standard multicore platforms by taking as a reference...
Detecting data races among the threads of a concurrent program is one of the most important debugging issues. However, data races are not easily detected, due to the inherent concurrency and nondeterministic execution of the participating threads. The widely employed dynamic data race detection methods generally scrutinize only one of the sampled execution paths of the program, and may thus miss some...
This paper describes the demonstrative software simulator "E14", which is helpful for studying the essentials of parallel computation on an ordinary PC. It contains five virtual processors with an identical instruction set, one of which controls the other four. The simulator has several mechanisms for data exchange between processors, so it can be used for studying both architectures with shared and distributed...
As contemporary distributed systems reach their scalability limits, their architects search for ways to push their boundaries further. One approach is to convert a given distributed system into a peer-to-peer (P2P) structure. This approach incurs more overhead than a single-master or multi-master architecture, but at larger scales (if properly designed) it does not hit such limits. The...
Multi/many-core architectures will be the prevalent platform for future system design. Recent investigations show that a hybrid optical-electrical interconnection network can be an appropriate alternative to the traditional electrical NoC. Undoubtedly, the memory wall is one of the most important challenges of multi/many-core systems, one that can be partially alleviated by a hierarchical memory structure...
Hybrid Wireless Network-on-Chip (HWNoC) provides high bandwidth, low latency and flexible topology configurations, making this emerging technology a scalable communication fabric for future Many-Core System-on-Chips (MCSoCs). On the other hand, dark silicon is coming to dominate the chip area of upcoming MCSoCs, since Dennard scaling fails due to the voltage scaling problem, which results in higher power...
This paper proposes a latency-aware task mapping algorithm called 3D-AMAP for 3D mesh-based NoCs with partially-filled TSVs. The 3D-AMAP algorithm divides the communications of a given application graph into low-volume (LV) and high-volume (HV) communications. It bypasses the LV communications to partition the application graph into subgraphs. Then, 3D-AMAP fairly...
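The first phase described — classifying edges of the application graph by communication volume and using the low-volume edges as cut points — might be sketched as follows. The volume threshold, data layout, and function names here are assumptions for illustration, not details taken from the paper.

```python
from collections import defaultdict

def partition_by_volume(edges, threshold):
    """edges: list of (u, v, volume) tuples of an application graph.
    Classifies edges as HV (>= threshold) or LV (< threshold), then
    returns the connected components of the HV-only graph as subgraphs."""
    hv = [(u, v, vol) for u, v, vol in edges if vol >= threshold]
    lv = [(u, v, vol) for u, v, vol in edges if vol < threshold]

    nodes = set()
    adj = defaultdict(set)
    for u, v, _ in edges:
        nodes.update((u, v))
    for u, v, _ in hv:          # LV edges are dropped: they become the cuts
        adj[u].add(v)
        adj[v].add(u)

    seen, subgraphs = set(), []
    for start in sorted(nodes):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:            # iterative DFS over HV edges only
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        subgraphs.append(sorted(comp))
    return hv, lv, subgraphs
```

Cutting only low-volume edges means the traffic that crosses subgraph boundaries (and may thus traverse the sparse TSVs) is, by construction, the traffic that matters least for latency.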
Network-on-Chip (NoC) is a communication subsystem that has been widely utilized in many-core processors and systems-on-chip in general. In order to execute time-critical applications on a NoC-based platform, the timing behavior of the network needs to be predicted during system design. One of the most important timing requirements concerns schedulability, which refers to determining whether a real-time...
The small feature sizes in current Networks-on-Chip (NoCs) have increased the importance of reliability. However, existing fault tolerance schemes incur performance and power costs that can be prohibitive. In order to tackle the reliability problem in NoCs while minimizing the performance and energy costs, a compiler-enhanced reliability scheme is introduced in this paper...
Virtualization is a key enabling technology in Cloud computing that allows users to run multiple virtual machines (VMs) with their own application environment on top of physical hardware. It permits scaling applications up and down by elastic on-demand provisioning of VMs in response to their variable load, achieving increased utilization efficiency at a lower operational cost while guaranteeing...
There is an opportunity for Distributed Computing Infrastructures (DCIs) to embrace container-based virtualisation to support efficient execution of scientific applications without the performance penalty commonly introduced by Virtual Machines (VMs). However, containers (e.g. Docker) and VMs feature different image formats and disparate procedures for deployment and management, thus hindering the...