For commercial software in scientific and engineering computing, software licenses are required when running it on high performance computing systems. The number of available licenses is usually constrained. Traditional license management approaches have a prominent issue: jobs fail immediately when no license is available. However, the existing job...
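The abstract above is truncated, so the paper's own scheme is not shown here; the general alternative it alludes to — queueing jobs until licenses free up instead of failing them — can be sketched as follows. The `LicenseScheduler` class and its method names are hypothetical illustrations, not from the paper:

```python
from collections import deque

class LicenseScheduler:
    """Defers jobs when licenses are exhausted instead of failing them."""

    def __init__(self, total_licenses):
        self.free = total_licenses      # licenses currently available
        self.waiting = deque()          # FIFO queue of deferred jobs

    def submit(self, job_id, licenses_needed):
        """Start the job if enough licenses are free, else queue it."""
        if licenses_needed <= self.free:
            self.free -= licenses_needed
            return "running"
        self.waiting.append((job_id, licenses_needed))
        return "queued"

    def finish(self, licenses_released):
        """Return licenses to the pool and start any queued jobs that now fit."""
        self.free += licenses_released
        started = []
        while self.waiting and self.waiting[0][1] <= self.free:
            job_id, need = self.waiting.popleft()
            self.free -= need
            started.append(job_id)
        return started
```

With 4 licenses, submitting a 3-license job then a 2-license job runs the first and queues the second; when the first finishes and returns its licenses, the queued job starts.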
Job scheduling is a necessary prerequisite for performance optimization and resource management in cloud computing systems. Focusing on an accurately scaled cloud computing environment and efficient job scheduling under Virtual Machine (VM) resource and Service Level Agreement (SLA) constraints, we introduce the architecture of a cloud computing platform and an optimized job scheduling scheme in this study...
Scientific applications are very complex and need massive computing power and storage space. The distributed computing environment has become a new way to execute large-scale applications, and cloud computing is one of these technologies. Resource allocation is one of the most important challenges in cloud computing. Optimally assigning the available resources to the needed cloud applications...
The distributed computing environment has become a new way to execute large-scale applications, and cloud computing is one of these technologies. Resource allocation is one of the most important challenges in cloud computing. Optimally assigning the available resources to the needed cloud applications is known to be an NP-complete problem. In this paper, we propose a new task scheduling...
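Because optimal assignment is NP-complete, practical cloud schedulers rely on heuristics. The truncated abstract does not specify the paper's algorithm; the sketch below shows one common baseline heuristic instead — assign each task (longest first) to the VM that would finish it earliest:

```python
def greedy_schedule(task_lengths, vm_speeds):
    """Earliest-completion-time heuristic with longest-task-first ordering.

    task_lengths: work units per task; vm_speeds: work units per second per VM.
    Returns (placement, makespan), where placement maps each task length
    to the index of the VM it was assigned to.
    """
    finish = [0.0] * len(vm_speeds)   # current finish time of each VM's queue
    placement = []
    for length in sorted(task_lengths, reverse=True):
        # completion time of this task on each VM
        ect = [finish[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        v = min(range(len(vm_speeds)), key=lambda i: ect[i])
        finish[v] = ect[v]
        placement.append((length, v))
    return placement, max(finish)
```

For example, tasks of length 4, 2, 2 on VMs of speed 2 and 1 yield a makespan of 3.0: the long task goes to the fast VM, and the two short tasks are split between the machines.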
Grid computing is considered the upcoming phase of distributed computing. Grids focus on maximizing an organization's resource utilization by sharing resources across applications. In grid computing, job scheduling is an important task. Load balancing and resource allocation are vital issues that must be considered in a grid computing environment. Load balancing is the technique that distributes...
A Dominant Resource Fairness (DRF) based scheme is proposed for job scheduling in distributed cloud computing systems, modeled as a coupled multi-job scheduling and multi-resource allocation problem, where the resource pool is constructed from a large number of distributed heterogeneous servers representing different points in the configuration space of resources such as processing, memory,...
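The DRF mechanism itself is standard: each user's dominant share is the largest fraction of any single resource allocated to them, and the scheduler repeatedly grants a task to the user with the smallest dominant share. A minimal sketch of this progressive-filling loop follows; it does not reproduce the distributed multi-server coupling the abstract describes:

```python
def drf_allocate(capacity, demands):
    """Progressive-filling DRF over a single shared resource pool.

    capacity: total amount of each resource type.
    demands: per-user, per-task demand vector for each resource type.
    Returns (tasks_per_user, allocation_per_user).
    """
    n, m = len(demands), len(capacity)
    allocation = [[0.0] * m for _ in range(n)]
    tasks = [0] * n
    used = [0.0] * m
    dominant = [0.0] * n                      # each user's dominant share
    while True:
        progressed = False
        # consider users in order of increasing dominant share
        for u in sorted(range(n), key=lambda u: dominant[u]):
            if all(used[r] + demands[u][r] <= capacity[r] for r in range(m)):
                for r in range(m):
                    used[r] += demands[u][r]
                    allocation[u][r] += demands[u][r]
                tasks[u] += 1
                dominant[u] = max(allocation[u][r] / capacity[r] for r in range(m))
                progressed = True
                break
        if not progressed:                    # no user's next task fits
            return tasks, allocation
```

On the classic two-user example — a pool of 9 CPUs and 18 GB of memory, user A demanding (1 CPU, 4 GB) per task and user B demanding (3 CPUs, 1 GB) — this yields 3 tasks for A and 2 for B, equalizing their dominant shares at 2/3.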
Server farms play an important role in the Internet infrastructure today. However, their increasing power consumption makes them expensive to operate. Thus, how to reduce the power consumed by server farms has become an important research topic. Power can be thought of as a system resource, just like traditional resources, and we can manage power via improved resource management...
In this paper, we propose Shaking-G, a new algorithm for job interchange in Computational Grids that consist of autonomous and equitable HPC sites. Originally developed for balancing the sharing of video files in P2P networks, the algorithm is conceptually transferred and adapted to the domain of job scheduling in Grids, building an integrated, load-adaptive two-tier job exchange strategy. We evaluate...
Checkpoint/restart has been widely used in computing systems for fault tolerance, job scheduling and system maintenance purposes. However, a lack of transparency has hindered the adoption of many of its implementations. In this paper, we present a fully transparent parallel checkpoint/restart framework, DCR, which takes advantage of a kernel-level checkpointing method and TCP session preservation...
As human culture advances, current problems in science and engineering become more complicated and need more computing power to tackle and analyze. A supercomputer is no longer the only choice for complex problems, thanks to the speed-up of personal computers and networks. Grid technology, which connects a number of personal computers with high-speed networks, can achieve the same computing...
In this paper, a P2P volunteer computing system, PPVC, is presented. Volunteers are organized as a P2P network, i.e. there is no central server and every volunteer has the same function. It uses a decentralized job scheduling method so that each volunteer only needs to communicate with its direct neighbors, yet an application can be distributed to all the volunteers. Using this job scheduling...
The adoption of Web service standards provides us with an increased level of manageability, extensibility and interoperability between loosely coupled services. Adopting Web services technologies atop HPC sites for performance monitoring and scheduling will improve the efficient use of computational resources. Web services provide the ability to decompose HPC resources and functionality into...
Cluster management software faces ever greater scalability challenges as cluster scale grows. Good scalability rests on feasible design techniques: hybrid software topologies with a partitioning policy, non-blocking I/O multiplexing, and message on demand. Design patterns are generic solutions to recurring software design problems, and the three techniques above...
Our previous study investigated the modeling and performance evaluation of QoS-aware job scheduling on computational grids using the stochastic high-level Petri net (SHLPN). This paper proposes an approximate performance analysis technique, based on decomposition and refinement of the SHLPN model as well as iteration among submodels, to reduce the complexity of the model and cope with...
Data-intensive applications, such as high energy physics, usually have a large amount of input data that requires analysis. These data are often shared and replicated across the data grid. As computing power increases, the delay caused by "waiting for input data" becomes more pronounced. In this paper, we study the impact of parallel download on job scheduler performance in data grids...