A geographically isolated disaster-tolerant architecture is essential for enhancing the data security of a Distributed File System (DFS). However, the plain disaster-tolerant model performs poorly when a hot-backup switch occurs. In this study, we propose a Direct Data Fetch (DDF) technique for the hybrid model. The DDF can fetch data directly from the data chunk server while the main server...
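The excerpt suggests that during a hot-backup switch the client bypasses the main (metadata) server and reads chunks directly from the storage servers. Below is a minimal Python sketch of that fallback idea; all class and method names (DDFClient, locate, fetch) are illustrative assumptions, not the paper's actual interfaces.

    # Hedged sketch of a direct-data-fetch (DDF) fallback; names are
    # hypothetical, not taken from the paper.
    class ChunkServer:
        def __init__(self, chunks):
            self.chunks = chunks                      # chunk_id -> bytes

        def fetch(self, chunk_id):
            return self.chunks[chunk_id]

    class MainServer:
        def __init__(self, table, up=True):
            self.table = table                        # file_id -> [(chunk_id, server)]
            self.up = up

        def locate(self, file_id):
            if not self.up:
                raise ConnectionError("main server switching over")
            return self.table[file_id]

    class DDFClient:
        def __init__(self, main_server):
            self.main = main_server
            self.cache = {}                           # cached chunk locations

        def read(self, file_id):
            try:
                locs = self.main.locate(file_id)      # normal metadata path
                self.cache[file_id] = locs
            except ConnectionError:                   # hot-backup switch in progress:
                locs = self.cache[file_id]            # use cached locations instead
            return b"".join(srv.fetch(cid) for cid, srv in locs)

    s = ChunkServer({"c1": b"hello ", "c2": b"world"})
    main = MainServer({"f": [("c1", s), ("c2", s)]})
    client = DDFClient(main)
    client.read("f")                                  # warms the location cache
    main.up = False                                   # simulate the switch
    print(client.read("f"))                           # b'hello world' via direct fetch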
Nowadays, an individual uses multiple ICT devices such as PCs, laptops, smartphones and others, and content files are no longer dedicated to a specific device but shared across devices. One such sharing service is personal cloud computing, with which users can back up, synchronize, share and manage their files. But most cloud systems have their own dedicated interfaces and it is not easy to use...
Highly available metadata services of distributed file systems are essential to cloud applications. However, existing highly available metadata designs lack client-oriented features that treat metadata discriminately, leading to a single metadata fault domain and low availability. After investigating the workload characteristics of Hadoop, we propose Client-Oriented METadata (COMET), a novel highly...
This paper presents an adaptive replica synchronization mechanism among storage servers (SSs) without interference from the metadata server (MDS) in a distributed file system. The mechanism employs a chunk list data structure, which holds information about the relevant chunk replicas and is stored on the SSs hosting those replicas. Combined with version-based update replay...
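The abstract names a chunk list plus version-based update replay. A minimal sketch of how such a structure could let a stale replica catch up by replaying only the versions it missed, without asking the MDS; the layout and names here are assumptions for illustration, not the paper's actual structures.

    # Versioned chunk-replica list with update replay (illustrative).
    class Replica:
        def __init__(self):
            self.version = 0
            self.data = b""
            self.log = []                      # [(version, bytes)] applied updates

        def apply(self, version, update):
            assert version == self.version + 1
            self.data += update
            self.version = version
            self.log.append((version, update))

        def sync_from(self, peer):
            # Version-based replay: pull only the updates this replica
            # missed, directly from the peer storage server.
            for version, update in peer.log:
                if version > self.version:
                    self.apply(version, update)

    primary, stale = Replica(), Replica()
    primary.apply(1, b"a"); primary.apply(2, b"b")
    stale.sync_from(primary)
    print(stale.version, stale.data)           # 2 b'ab'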
As mobile devices and wireless network access are becoming more pervasive, people are more likely to access and share files at any time and in any place. A variety of distributed file systems have been developed to support file sharing and distribution. However, problems such as high complexity, low flexibility, low data integrity, management difficulties, low security, and high susceptibility to network...
In this paper, we improve the performance of server-side I/O scheduling on parallel file systems by transparently including information about applications' access patterns. Server-side I/O scheduling is a valuable tool in multi-application scenarios, where applications' spatial locality suffers from interference caused by concurrent accesses to the file system. We present AGIOS, an I/O scheduling...
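The core problem is that interleaved requests from concurrent applications destroy each application's spatial locality at the server. A toy illustration of the reordering idea, sorting pending requests by (file, offset) so each file is served contiguously; AGIOS's real policies are richer than this.

    # Server-side reordering of interleaved I/O requests (toy example).
    import heapq

    requests = [                               # (app, file, offset)
        ("A", "f1", 8), ("B", "f2", 0), ("A", "f1", 0),
        ("B", "f2", 4), ("A", "f1", 4),
    ]

    queue = [(file, offset, app) for app, file, offset in requests]
    heapq.heapify(queue)                       # order by (file, offset)
    while queue:
        file, offset, app = heapq.heappop(queue)
        print(f"serve {app}: {file} @ {offset}")   # f1 @ 0,4,8 then f2 @ 0,4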
Hadoop HDFS is an open-source project from the Apache Software Foundation for scalable, distributed computing and data storage. HDFS has become a critical component in today's cloud computing environment, and a wide range of applications are built on top of it. However, the initial design of HDFS introduced a single point of failure: HDFS contains only one active name node, and if this name node experiences...
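Hadoop later mitigated this with high-availability NameNode pairs and automatic failover. A minimal sketch of the failover idea, with an active and a standby metadata node behind a client-side proxy; these classes are hypothetical, and real HDFS HA coordinates failover via ZooKeeper and shared journal state.

    # Active/standby failover sketch (hypothetical classes).
    class NameNode:
        def __init__(self, name):
            self.name, self.alive = name, True

        def lookup(self, path):
            if not self.alive:
                raise ConnectionError(f"{self.name} is down")
            return f"blocks of {path}"

    class FailoverProxy:
        def __init__(self, active, standby):
            self.nodes = [active, standby]

        def lookup(self, path):
            for node in self.nodes:            # try active, then standby
                try:
                    return node.lookup(path)
                except ConnectionError:
                    continue
            raise RuntimeError("no name node available")

    proxy = FailoverProxy(NameNode("nn1"), NameNode("nn2"))
    proxy.nodes[0].alive = False               # simulate the active node failing
    print(proxy.lookup("/data/file"))          # served by the standby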
Password-based authentication schemes and their graphical evolutions have been deeply analyzed in the last couple of decades. Typically such schemes are not resilient to shoulder surfing attacks, that is, if the adversary can observe (and “understand”) a number of authentication sessions, he can identify the secret password. In this paper we propose a new paradigm for user authentication. FilmPW is...
File replication and consistency are well-known techniques in distributed systems for addressing key issues such as scalability, reliability and fault tolerance. For many years, file replication and consistency in distributed environments have been researched to enhance and optimize the availability and reliability of the entire system. An effort has been made in the present work to propose a file popularity...
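The abstract breaks off at "file popularity", but a popularity-driven policy typically maps access frequency to a replication factor. A toy sketch under that assumption; the thresholds and the name replication_factor are invented for illustration and are not the paper's policy.

    # Popularity-driven replication (illustrative thresholds).
    from collections import Counter

    accesses = Counter({"hot.dat": 120, "warm.dat": 15, "cold.dat": 2})

    def replication_factor(count, base=2, hot=100, warm=10):
        if count >= hot:
            return base + 2                    # popular: replicate widely
        if count >= warm:
            return base + 1
        return base                            # minimum for fault tolerance

    for f, n in accesses.items():
        print(f, "->", replication_factor(n), "replicas")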
Supercomputers generate vast amounts of data, typically organized into large directory hierarchies on parallel file systems. While supercomputing applications are parallel, the tools used to process this data, which require complete directory traversals, are typically serial. We present an algorithmic framework and three fully distributed algorithms for traversing large parallel file systems, and performing...
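To make the contrast concrete, here is a work-queue-based parallel traversal sketch: each level's directories are fanned out to workers instead of being walked one by one. This uses a thread pool for brevity; the paper's algorithms are fully distributed across nodes, which this does not show.

    # Parallel directory traversal via a work queue (sketch).
    import os
    from concurrent.futures import ThreadPoolExecutor

    def walk_parallel(root, workers=8):
        results, pending = [], [root]
        with ThreadPoolExecutor(workers) as pool:
            while pending:
                futures = [pool.submit(os.scandir, d) for d in pending]
                pending = []
                for fut in futures:
                    for entry in fut.result():
                        if entry.is_dir(follow_symlinks=False):
                            pending.append(entry.path)   # fan out to workers
                        else:
                            results.append(entry.path)
        return results

    print(len(walk_parallel(".")))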
Virtualization has significantly improved hardware utilization, thus allowing IT service providers to offer a wide range of application, platform and infrastructure solutions through low-cost, commoditized hardware. In this paper we focus on one such layer: storage virtualization, which enables a host system to map a guest VM's file system to almost any storage media. A file system maintains track...
Existing parallel file systems are unable to differentiate I/O requests from concurrent applications and meet per-application bandwidth requirements. This limitation prevents applications from meeting their desired Quality of Service (QoS) as high-performance computing (HPC) systems continue to scale up. This paper presents vPFS, a new solution to address this challenge through a bandwidth virtualization...
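The basic mechanism behind bandwidth virtualization is proportional sharing: each application receives server bandwidth in proportion to an assigned weight. A minimal sketch of that allocation step; the weights and application names are illustrative, and vPFS's actual schedulers are considerably more involved.

    # Proportional-share bandwidth allocation (toy example).
    def allocate(total_mbps, weights):
        total_w = sum(weights.values())
        return {app: total_mbps * w / total_w for app, w in weights.items()}

    print(allocate(1000, {"checkpoint_app": 3, "analysis_app": 1}))
    # {'checkpoint_app': 750.0, 'analysis_app': 250.0}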
Mobile devices have limited resources, above all data storage capacity and data transfer rate. The amount of data used by today's applications is beyond the capability of mobile devices. A distributed file system presents an appropriate solution, but current distributed file systems are not suitable for mobile devices. We have explored the properties of current distributed file systems in relation to...
In this study, a content service integration system based on a D-CATV cloud platform was designed to maximize the efficiency of server resource usage through cloud VM services, which serve as a virtual server management system. The proposed study applies the Virtual Machine Image (VMI) management technology employed in cloud VM services to reduce service recovery time as...
A file system snapshot is a stable image of all files and directories in a well-defined state. Local file systems offer point-in-time consistency of snapshots, which guarantees that all files are frozen in a state in which they were at the same point in time. However, this cannot be achieved in a distributed file system without global clocks or synchronous snapshot operations. We present an algorithm...
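The abstract stops before describing the algorithm, so the sketch below shows one generic technique for a point-in-time-like view without global clocks: an epoch counter plus copy-on-write versions. It illustrates the problem the abstract states, not necessarily the paper's own algorithm.

    # Epoch-based copy-on-write snapshots (generic technique, sketch).
    class SnapshotStore:
        def __init__(self):
            self.epoch = 0
            self.files = {}                    # name -> [(epoch, data)] versions

        def write(self, name, data):
            self.files.setdefault(name, []).append((self.epoch, data))

        def snapshot(self):
            self.epoch += 1                    # later writes belong to a new epoch
            return self.epoch - 1

        def read_at(self, name, snap):
            # Latest version written at or before the snapshot epoch.
            versions = [d for e, d in self.files.get(name, []) if e <= snap]
            return versions[-1] if versions else None

    s = SnapshotStore()
    s.write("a", b"v1")
    snap = s.snapshot()
    s.write("a", b"v2")
    print(s.read_at("a", snap))               # b'v1': the frozen state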
Some new paradigms of large-scale distributed computing, such as cluster, grid, and cloud computing, have recently been developed to effectively support the exponentially growing amount of data. Numerous users store their data in distributed storage that is accessed remotely anytime and anywhere. Therefore, an appropriate concurrency control mechanism such as locking is needed so that multiple users can...
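A minimal sketch of the kind of locking the abstract refers to: a lock manager granting shared (read) and exclusive (write) locks on stored objects, with conflicting requests refused. The class and its conventions are illustrative only.

    # Shared/exclusive lock manager (illustrative).
    class LockManager:
        def __init__(self):
            self.locks = {}                    # key -> ("S", {holders}) or ("X", holder)

        def acquire(self, key, client, exclusive=False):
            mode = self.locks.get(key)
            if mode is None:
                self.locks[key] = ("X", client) if exclusive else ("S", {client})
                return True
            if not exclusive and mode[0] == "S":
                mode[1].add(client)            # shared locks are compatible
                return True
            return False                       # conflict: caller must wait/retry

        def release(self, key, client):
            kind, holders = self.locks.pop(key)
            if kind == "S":
                holders.discard(client)
                if holders:
                    self.locks[key] = ("S", holders)

    lm = LockManager()
    print(lm.acquire("/f", "c1"))              # True  (shared)
    print(lm.acquire("/f", "c2"))              # True  (shared is compatible)
    print(lm.acquire("/f", "c3", True))        # False (exclusive conflicts)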
In conventional CATV, there is a limit to introducing personalized and bidirectional broadcasting services due to the lack of a technological basis. In addition, conventional CATV requires costly set-top boxes (STBs) and carries a heavy infrastructure investment burden, because the infrastructures each system operator (SO) builds for providing services are difficult to share. Also, the large...
Virtual clusters are a new way of managing computing resources in a cluster environment: users are presented with virtual clusters instead of physical ones. In such systems, the storage system supporting the running clusters is critical for efficiently storing virtual machine images. We have used CAS (Content Addressable Storage) based storage to manage the large number of virtual...
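Content-addressable storage stores each block under the hash of its content, so identical blocks shared by many VM images are kept only once. A minimal sketch of that deduplication; the chunk size and class names are illustrative.

    # Content-addressable storage with deduplication (sketch).
    import hashlib

    class CASStore:
        def __init__(self, chunk_size=4096):
            self.blocks = {}                   # sha256 hex -> bytes
            self.chunk_size = chunk_size

        def put_image(self, data):
            recipe = []                        # ordered list of block hashes
            for i in range(0, len(data), self.chunk_size):
                chunk = data[i:i + self.chunk_size]
                h = hashlib.sha256(chunk).hexdigest()
                self.blocks.setdefault(h, chunk)   # dedup: store once
                recipe.append(h)
            return recipe

        def get_image(self, recipe):
            return b"".join(self.blocks[h] for h in recipe)

    store = CASStore()
    r1 = store.put_image(b"A" * 8192)          # two identical 4 KiB blocks
    r2 = store.put_image(b"A" * 8192)          # a second, identical image
    print(len(store.blocks))                   # 1: everything deduplicated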
In a file system, critical sections are represented by read or write operations on useful data (i.e., files) as well as on the system's metadata. Processes must be synchronized when accessing these shared resources, using mutual exclusion algorithms that guarantee data consistency. In a grid environment, processes correspond to grid nodes, and their synchronization is ensured by the sending...
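Message-based mutual exclusion is often token-based: a single token circulates among nodes, and only the holder may enter the critical section. A toy sketch using a simple ring; real grid algorithms route token requests more cleverly (e.g. along trees), which this does not attempt to show.

    # Token-based mutual exclusion on a ring (toy example).
    from itertools import cycle

    nodes = ["n1", "n2", "n3"]
    wants = {"n1": False, "n2": True, "n3": True}

    for holder in cycle(nodes):                # the token travels around the ring
        if wants[holder]:
            print(f"{holder} enters critical section")  # safe: single token holder
            wants[holder] = False
        if not any(wants.values()):
            break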
Conventional data grid systems require the application to access remote data through explicit APIs, consequently sacrificing user transparency. Thus, some systems enable the application to transparently access remote data by copying the entire file to a user's local storage before executing the application, even when only a tiny file fragment is required. Such an approach consumes unnecessary...
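The alternative the abstract builds toward is fetching only the blocks that cover the requested byte range, on demand. A minimal sketch of that lazy-fetch idea; RemoteFile and its fetch_block callback are hypothetical.

    # On-demand block fetching instead of whole-file copying (sketch).
    class RemoteFile:
        BLOCK = 4096

        def __init__(self, fetch_block):
            self.fetch_block = fetch_block     # callable: block index -> bytes
            self.cache = {}

        def read(self, offset, length):
            first, last = offset // self.BLOCK, (offset + length - 1) // self.BLOCK
            for b in range(first, last + 1):
                if b not in self.cache:        # fetch only what is needed
                    self.cache[b] = self.fetch_block(b)
            data = b"".join(self.cache[b] for b in range(first, last + 1))
            start = offset - first * self.BLOCK
            return data[start:start + length]

    remote = {i: bytes([i % 256]) * 4096 for i in range(10)}   # 40 KiB "file"
    f = RemoteFile(lambda b: remote[b])
    f.read(5000, 100)                          # needs only block 1
    print(len(f.cache))                        # 1 block fetched, not the whole file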