The Complete Hadoop EcoSystem – Distributed File System

Distributed File System

Big Data is an all-inclusive term for data sets so large and complex that they must be processed by specially designed hardware and software tools. The data sets are typically on the order of terabytes or even exabytes in size. These data sets are created from a diverse range of sources: sensors that gather climate information, and publicly available information such as magazines, newspapers, and articles. Other examples where big data is generated include purchase transaction records, web logs, medical records, military surveillance, video and image archives, and large-scale e-commerce.

There is heightened interest in Big Data. Oceans of digital data are being created by the interactions between individuals, businesses, and government agencies. There are enormous benefits open to organisations, provided they can effectively identify, access, filter, analyse, and select the relevant parts of this data.

Big Data demands the storage of massive amounts of data, which makes advanced storage infrastructure a necessity: a storage solution designed to scale out across multiple servers.

Below is a list of the distributed file systems currently available.

Apache HDFS

The Hadoop Distributed File System (HDFS) offers a way to store large files across multiple machines. Hadoop and HDFS were derived from the Google File System (GFS) paper. Prior to Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. With ZooKeeper, the HDFS High Availability feature addresses this problem by providing the option of running two redundant NameNodes in the same cluster in an active/passive configuration with a hot standby. A minimal usage sketch follows the references below.

1. hadoop.apache.org
2. Google FileSystem – GFS Paper
3. Cloudera Why HDFS
4. Hortonworks Why HDFS
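To make the storage model concrete, here is a minimal sketch of writing and reading a file through Hadoop's Java FileSystem API. The hdfs://mycluster URI is an assumption standing in for your NameNode address (or an HA nameservice ID defined in hdfs-site.xml); everything else uses standard hadoop-client classes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "hdfs://mycluster" is a placeholder; with High Availability it
        // would name a nameservice configured via dfs.nameservices.
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);

        Path file = new Path("/tmp/hello.txt");
        // Write a small file (overwrite if it already exists).
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("Hello, HDFS!");
        }
        // Read it back.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }
        fs.close();
    }
}

Because the NameNode failover described above happens behind the FileSystem abstraction, client code like this does not change when High Availability is enabled.
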
Red Hat GlusterFS

GlusterFS is a scale-out network-attached storage file system. It was developed originally by Gluster, Inc., and then by Red Hat, Inc., after its purchase of Gluster in 2011. In June 2012, Red Hat Storage Server was announced as a commercially supported integration of GlusterFS with Red Hat Enterprise Linux; the Gluster File System is now known as Red Hat Storage Server.

1. www.gluster.org
2. Red Hat Hadoop Plugin
Quantcast File System

The Quantcast File System (QFS) is an open-source distributed file system software package for large-scale MapReduce or other batch-processing workloads. It was designed as an alternative to Apache Hadoop's HDFS, intended to deliver better performance and cost-efficiency for large-scale processing clusters. It is written in C++ and has fixed-footprint memory management. QFS uses Reed–Solomon error correction as a method for ensuring reliable access to data.

Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects. Rather than storing three full copies of each file as HDFS does, which requires three times the raw storage, QFS needs only 1.5x the raw capacity because it stripes data across nine different disk drives, as the sketch below illustrates.

1. QFS site
2. GitHub QFS
3. HADOOP-8885
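The 1.5x figure falls straight out of the striping arithmetic. Here is a minimal sketch, assuming a 6-data-plus-3-parity Reed–Solomon layout (consistent with the nine drives and 1.5x figure quoted above); the class and method names are illustrative only:

public class StorageOverhead {

    // N-way replication stores every byte N times.
    static double replicationOverhead(int copies) {
        return copies;
    }

    // Reed-Solomon striping stores the data once, plus parity stripes.
    static double reedSolomonOverhead(int dataStripes, int parityStripes) {
        return (double) (dataStripes + parityStripes) / dataStripes;
    }

    public static void main(String[] args) {
        // HDFS default: 3 full copies -> 3.0x raw capacity
        System.out.printf("3x replication:   %.1fx%n", replicationOverhead(3));
        // QFS-style RS(6+3): 9 stripes holding 6 stripes of data -> 1.5x
        System.out.printf("RS(6+3) striping: %.1fx%n", reedSolomonOverhead(6, 3));
    }
}
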
Ceph Filesystem

Ceph is a free-software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. Data is replicated, making the system fault tolerant. The current limitation is that Ceph's Hadoop integration requires the Hadoop 1.1.x stable series; a hedged configuration sketch follows the references below.

1. Ceph Filesystem site
2. Ceph and Hadoop
3. HADOOP-6253
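For integration, CephFS can be exposed to Hadoop as a file system reachable through ceph:// URIs. Below is a hedged sketch of wiring it up in client code; the implementation class name follows the cephfs-hadoop bindings, and the monitor host and port are placeholders, so verify all of them against your own cluster and plugin version:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class CephFsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register the CephFS implementation for the ceph:// scheme
        // (normally set once in core-site.xml rather than in code).
        conf.set("fs.ceph.impl", "org.apache.hadoop.fs.ceph.CephFileSystem");
        // ceph-mon.example.com:6789 is a hypothetical monitor address.
        FileSystem fs = FileSystem.get(
                URI.create("ceph://ceph-mon.example.com:6789/"), conf);
        // List the root directory as a quick smoke test.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
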
Lustre file system

The Lustre file system is a high-performance distributed file system intended for large network and high-availability environments. Traditionally, Lustre is configured to manage remote data storage disk devices within a Storage Area Network (SAN): two or more remotely attached disk devices communicating via the Small Computer System Interface (SCSI) protocol. This includes Fibre Channel, Fibre Channel over Ethernet (FCoE), Serial Attached SCSI (SAS), and even iSCSI.

With Hadoop, HDFS requires a dedicated cluster of computers on which to run. But organisations that run high-performance computing clusters for other purposes often don't run HDFS, which leaves them with plenty of computing power, tasks that could almost certainly benefit from a bit of MapReduce, and no way to put that power to work running Hadoop. Intel noticed this and, in version 2.5 of its Hadoop distribution, added support for Lustre: the Intel HPC Distribution for Apache Hadoop Software, a product that combines the Intel Distribution for Apache Hadoop software with the Intel Enterprise Edition for Lustre software. This is the only distribution of Apache Hadoop that is integrated with Lustre, the parallel file system used by many of the world's fastest supercomputers.

1. wiki.lustre.org
2. Hadoop with Lustre
3. Intel HPC Hadoop
Tachyon

Tachyon is an in-memory distributed file system. By storing file-system contents in the main memory of all cluster nodes, the system achieves higher throughput than traditional disk-based storage systems such as HDFS.

1. Tachyon site
GridGain

GridGain is an open-source project licensed under Apache 2.0. One of the main pieces of this platform is the In-Memory Apache Hadoop Accelerator, which aims to accelerate HDFS and MapReduce by bringing both data and computations into memory. This work is done with GGFS, a Hadoop-compliant in-memory file system. For I/O-intensive jobs, GridGain GGFS offers performance close to 100x faster than standard HDFS. Paraphrasing Dmitriy Setrakyan of GridGain Systems, talking about GGFS in comparison with Tachyon:

– GGFS allows read-through and write-through to/from an underlying HDFS or any other Hadoop-compliant file system with zero code change. Essentially, GGFS entirely removes the ETL step from integration.
– GGFS has the ability to pick and choose which folders stay in memory, which folders stay on disk, and which folders get synchronized with the underlying (HD)FS, either synchronously or asynchronously.
– GridGain is working on adding a native MapReduce component which will provide complete native Hadoop integration without changes in the API, unlike what Spark currently forces you to do. Essentially, GridGain MR + GGFS will allow bringing Hadoop completely or partially in-memory in a plug-and-play fashion, without any API changes.

1. GridGain site

See also: The Complete Hadoop EcoSystem – Distributed Programming


