Distributed File Systems
460 resources related to Distributed File Systems
- Topics related to Distributed File Systems
- IEEE Organizations related to Distributed File Systems
- Conferences related to Distributed File Systems
- Periodicals related to Distributed File Systems
- Most published Xplore authors for Distributed File Systems
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions.
Cluster Computing, Grid Computing, Edge Computing, Cloud Computing, Parallel Computing, Distributed Computing
Technically co-sponsored by IEEE ComSoc (Communications Society), IEEE ComSoc CISTC (Communications & Information Security Technical Community), and IEEE ComSoc ONTC (Optical Networking Technical Community), the ICACT (International Conference on Advanced Communications Technology) conference has provided an open forum for scholars, researchers, and engineers for the extensive exchange of information on newly emerging technologies, standards, services, and applications in the area of advanced communications technology. The conference's official language is English. All presented papers have been published in the conference proceedings and posted on the ICACT Website and the IEEE Xplore Digital Library since 2004. The honorable ICACT Outstanding Paper Award list has also been posted on the IEEE Xplore Digital Library, and all Outstanding papers are invited papers for the "ICACT Transactions on the Advanced Communications Technology" journal issue by GIRI.
Computers in Technical Systems, Intelligent Systems, Distributed Computing and Visualization Systems, Communication Systems, Information Systems Security, Digital Economy, Computers in Education, Microelectronics, Electronic Technology, Education
The theory, design and application of Control Systems. It shall encompass components, and the integration of these components, as are necessary for the construction of such systems. The word "systems" as used herein shall be interpreted to include physical, biological, organizational and other entities, and combinations thereof, which can be represented through a mathematical symbolism. The Field of Interest shall ...
Covers topics in the scope of IEEE Transactions on Communications but in the form of very brief publications (maximum of 6 column lengths, including all diagrams and tables).
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
Design and analysis of algorithms, computer systems, and digital networks; methods for specifying, measuring, and modeling the performance of computers and computer systems; design of computer components, such as arithmetic units, data storage devices, and interface devices; design of reliable and testable digital devices and systems; computer networks and distributed computer systems; new computer organizations and architectures; applications of VLSI ...
After nine years of publication, DS Online will be moving into a new phase as part of Computing Now (http://computingnow.computer.org), a new website providing the front end to all of the Computer Society's magazines. As such, DS Online will no longer be publishing standalone peer-reviewed articles.
2011 Second Eastern European Regional Conference on the Engineering of Computer Based Systems, 2011
2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum, 2013
International Symposium on Computer Science and its Applications, 2008
2012 IEEE International Conference on Cluster Computing, 2012
2006 HPCMP Users Group Conference (HPCMP-UGC'06), 2006
APEC Speaker Highlights: Ron Van Dell
Yahoo's Raghu Ramakrishnan Discusses CAP and Cloud Data Management
2D Nanodevices - Paul Hurley at INC 2019
Trung Tran: Opening Keynote: WF IoT 2016
The Convergence of OT and IT, the Next Digital Wave - 2018 IEEE Industry Summit on the Future of Computing
Q&A with Alicia Abella, Assistant VP at AT&T
Validating Cyber-Physical Energy Systems, Part 2: IECON 2018
Validating Cyber-Physical Energy Systems, Part 3: IECON 2018
Q&A with Dr. Ling Liu: IEEE Big Data Podcast, Episode 8
Transportation Electrification: Smart Grids and Sustainable Generation
Validating Cyber-Physical Energy Systems, Part 4: IECON 2018
EDOC 2010 - Sylvain Halle - Best Paper Presentation
Validating Cyber-Physical Energy Systems, Part 1: IECON 2018
Lew Tucker, IEEE GLOBECOM'13 Keynote Address - Lew Tucker, CTO, Cisco Systems
The Vienna LTE-A Downlink Link-Level Simulator
IEEE Themes - Efficient networking services underpin social networks
Handling of a Single Object by Multiple Mobile Robots based on Caster-Like Dynamics
Hyperdimensional Biosignal Processing: A Case Study for EMG-based Hand Gesture Recognition - Abbas Rahimi: 2016 International Conference on Rebooting Computing
IMS MicroApps: AWR's iFilter
The need to store huge amounts of data has grown over the past years. Data should be stored for future reuse or for sharing among users. Data files can be stored on a local file system or on a distributed file system. A distributed file system provides many advantages, such as reliability, scalability, and security. This paper shows new trends in these systems with a focus on increasing performance, including the organization of data and metadata storage, the usage of caching, and the design of replication algorithms.
One of the bottlenecks of distributed file systems is mechanical hard drives (HDDs). Although solid-state drives (SSDs) have been around since the 1990s, HDDs are still dominant due to their large capacity and relatively low cost. Hybrid hard drives with a small built-in SSD cache do not meet the needs of a large variety of workloads. This paper proposes a middleware that manages the underlying heterogeneous storage devices so that distributed file systems can leverage SSD performance while retaining the capacity of HDDs. We design and implement a user-level file system, HyCache, that can offer SSD-like performance at a cost similar to an HDD. We show how HyCache can be used to improve performance in distributed file systems, such as the Hadoop HDFS. Experiments show that HyCache achieves up to 7X higher throughput and 76X higher IOPS than the Linux Ext4 file system, and can accelerate HDFS by 28% at 32-node scale (compared to vanilla HDFS).
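The idea of caching hot data on a small, fast SSD tier and spilling cold data to a large HDD tier can be sketched in a few lines. This is a hypothetical illustration, not the actual HyCache design: the class name, the LRU eviction policy, and the inclusive promotion-on-read behavior are all assumptions.

```python
from collections import OrderedDict

class TieredStore:
    """Toy SSD-over-HDD caching layer (illustrative only, not HyCache):
    hot blocks live in a fixed-size SSD tier managed with LRU;
    blocks evicted from the SSD tier spill to the HDD tier."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity   # max number of blocks on SSD
        self.ssd = OrderedDict()           # block_id -> data, kept in LRU order
        self.hdd = {}                      # block_id -> data

    def write(self, block_id, data):
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)     # mark as most recently used
        while len(self.ssd) > self.ssd_capacity:
            victim, vdata = self.ssd.popitem(last=False)  # evict the LRU block
            self.hdd[victim] = vdata                      # spill it to HDD

    def read(self, block_id):
        if block_id in self.ssd:           # SSD hit: fast path
            self.ssd.move_to_end(block_id)
            return self.ssd[block_id]
        data = self.hdd[block_id]          # SSD miss: fetch from the HDD tier
        self.write(block_id, data)         # promote the block into the SSD tier
        return data
```

A real middleware would sit below the distributed file system and operate on device-backed block buffers rather than an in-memory dict, but the hot/cold split and the promotion path are the same shape.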
Most large-scale distributed file systems decouple metadata operations from file read and write operations. In other words, dedicated servers called metadata servers (MDS) are responsible for maintaining the metadata of the file system. However, most existing file systems use restrictive metadata management techniques, because those systems were mostly designed to focus on distributed data management and input/output performance rather than on metadata. In this paper, we present a new non-shared metadata management technique that provides flexible metadata throughput and scalability. First, we introduce a new metadata distribution technique called dictionary partitions (DP). Then, we introduce a load distribution technique based on DP. In addition, we demonstrate the superiority of our technique compared with a shared metadata management technique.
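Distributing metadata across non-shared servers usually starts from some deterministic mapping of names to servers. The sketch below shows generic hash partitioning; it is not the paper's dictionary-partition (DP) scheme, which the abstract does not specify in enough detail to reproduce.

```python
import hashlib

def mds_for_path(path, num_servers):
    """Pick the metadata server (MDS) that owns a file's metadata by hashing
    the full path. Generic hash partitioning, shown for illustration only;
    the DP scheme in the paper distributes metadata differently."""
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    # Use the first 8 bytes of the digest as an integer, then map to a server.
    return int.from_bytes(digest[:8], "big") % num_servers
```

Hashing full paths spreads load evenly but makes directory renames expensive (every child's hash changes), which is one reason real systems explore alternatives such as subtree or dictionary-based partitioning.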
Distributed file systems (DFS) are key building blocks for cloud computing applications based on the MapReduce programming paradigm. In such file systems, nodes simultaneously serve computing and storage functions; a file is partitioned into a number of chunks allocated to distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. However, in a cloud computing environment, failure is the norm, and nodes may be upgraded, replaced, and added to the system. Files can also be dynamically created, deleted, and appended. This results in load imbalance, that is, the file chunks are not distributed as uniformly as possible across the nodes. Although distributed load balancing algorithms exist in the literature to deal with the load imbalance problem, emerging DFSs in production systems strongly depend on a central node for chunk reallocation. This dependence is clearly inadequate in a large-scale, failure-prone environment because the central load balancer is put under considerable workload that scales linearly with the system size, and may thus become the performance bottleneck and the single point of failure. In this paper, we illustrate and define the load rebalancing problem in cloud DFSs. We advocate that file systems in clouds should incorporate decentralized load rebalancing algorithms to eliminate the performance bottleneck and the single point of failure. Simulation results for a potential distributed load balancing algorithm are presented. The performance of our proposal implemented in the Hadoop distributed file system is also demonstrated.
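The core of chunk rebalancing is moving chunks from overloaded to underloaded nodes until loads converge. The toy sketch below uses a global view of all loads for clarity; a truly decentralized algorithm of the kind the abstract advocates would instead let each node act on a sampled partial view. The greedy heavy-to-light pairing and the threshold are assumptions for illustration.

```python
def rebalance_step(loads, threshold=1):
    """Toy chunk-rebalancing sketch (illustrative, not the paper's algorithm):
    repeatedly move one chunk from the most loaded node to the least loaded
    node until their gap is within the threshold. `loads` maps node -> chunk
    count; returns the rebalanced mapping without mutating the input."""
    loads = dict(loads)
    while True:
        heavy = max(loads, key=loads.get)   # most loaded node
        light = min(loads, key=loads.get)   # least loaded node
        if loads[heavy] - loads[light] <= threshold:
            return loads                    # loads are balanced enough
        loads[heavy] -= 1                   # migrate one chunk heavy -> light
        loads[light] += 1
```

In a real DFS each unit moved would be a chunk migration over the network, so practical algorithms also weigh migration cost against the balance gained.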
We present the development of a tool for analyzing the interactions between parallel/distributed file systems and databases in a Linux cluster environment. We begin with an overview of existing file system and database technology. We then discuss the actual design of the tool. A sample set of results on a small Linux cluster using NFS and Postgres is presented and analyzed by way of a demonstration and discussion of how this utility may be used for performance assessment.
A large class of modern distributed file systems treat metadata services as an independent system component, separately from data servers. The availability of the metadata service is key to the availability of the overall system. Given the high rates of failures observed in large-scale data centers, distributed file systems usually incorporate high-availability (HA) features. A typical approach in the development of distributed file systems is to design and develop metadata services from the ground up, at significant cost in terms of complexity and time, often leading to functional shortcomings. Our motivation in this paper was to improve on this state of things by defining a general-purpose architecture for HA metadata services (which we call RMS) that can be easily incorporated and reused in new or existing file systems, reducing development time. Taking two prominent distributed file systems as case studies, PVFS and HDFS, we developed RMS variants that improve on functional shortcomings of the original HA solutions, while being easy to build and test. Our extensive evaluation of the RMS variant of HDFS shows that it does not incur an overall performance or availability penalty compared to the original implementation.
With the development of cloud computing, distributed file systems are getting more and more attention. In this paper, we introduce the concept and the features of the distributed file system, and analyse several popular systems, such as Hadoop, MooseFS and Lustre. In particular, a performance comparison of these systems based on our tests is given. Finally, we propose research trends for the future.
Previous distributed file systems have relied either on convention or on obtaining dynamic global agreement to provide network transparent file naming. The authors argue that neither approach can succeed as systems scale to the kind of size that is anticipated in the current decade. They propose instead a novel name-mapping scheme which relies on a fragmented, selectively replicated name translation database. Updates to the naming database are coordinated by an optimistic concurrency control strategy with automatic propagation and reconciliation. A prototype of the name-mapping mechanism has been implemented and is in use in the Ficus replicated file system.
Recently, the analysis of small files is required to provide individual users with the latest information and optimized services. In this paper, we propose a distributed cache management scheme that considers cache metadata for efficient access to small files in the Hadoop Distributed File System (HDFS). The proposed scheme can reduce the number of metadata entries managed by a NameNode, since many small files are merged and stored in a single chunk. It also reduces unnecessary accesses by caching requested files at clients and data nodes and by synchronizing the metadata in client caches according to communication cycles.
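Merging many small files into one chunk with an offset index is the key trick for shrinking NameNode metadata. The sketch below shows the basic layout; the function names and index format are illustrative assumptions, not HDFS internals or the paper's scheme.

```python
def merge_small_files(files):
    """Pack many small files into one chunk plus an offset index, so a
    NameNode-like service tracks one chunk instead of one entry per file.
    `files` maps filename -> bytes; returns (chunk, index) where index
    maps filename -> (offset, length). Illustrative layout only."""
    chunk = bytearray()
    index = {}
    for name, data in files.items():
        index[name] = (len(chunk), len(data))  # record where this file lands
        chunk.extend(data)                     # append its bytes to the chunk
    return bytes(chunk), index

def read_small_file(chunk, index, name):
    """Recover one small file from the merged chunk via the index."""
    offset, length = index[name]
    return chunk[offset:offset + length]
```

With this layout, a client that has cached the index for a chunk can read any of its small files without another metadata round trip, which is the access pattern the proposed cache metadata exploits.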
We propose a mechanism for improving the performance of name resolution operations in distributed file systems. The mechanism is based on the idea of reducing the number of message exchanges required for the resolution of a particular file name. In this mechanism, users are given the flexibility to dynamically define and change their performance requirements for the various file names being used by them.
No standards are currently tagged "Distributed File Systems"
Siri - Operations Engineer
HPC Lustre System Software Developer
Lawrence Livermore National Laboratory
Lead Marketing Experimentation Data Scientist, Apple Media Products Data Science