Big data is one of the most trending technologies of the decade. If you know big data, you have probably heard of Hadoop. But if you don't know Hadoop, you have landed on the right page. Here we will cover the basics of Hadoop and its architecture.

Hadoop is complex, and you will come across many different terms. Before starting to work on Hadoop, you need a clear understanding of these terms, and that is exactly what we have here for you. Some are independent software projects created to integrate with the Hadoop framework, while others are part of the Hadoop architecture itself.

What Is Hadoop Distributed File System (HDFS)?

You will come across this term very frequently. HDFS is the distributed storage system of the Hadoop framework. Being a data repository, it stores data and grants access to it wherever required. In terms of the HDFS architecture, NameNodes and DataNodes are the two prominent components: the NameNode keeps the file system metadata, while DataNodes hold the actual data blocks. HDFS is generally the default storage system in the Hadoop ecosystem, with a major role in making data available to applications.
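To make the NameNode/DataNode split concrete, here is a small illustrative sketch (not the real HDFS API): a file is cut into fixed-size blocks, and each block is replicated across several DataNodes. The block size and replication factor mirror common HDFS defaults; the node names are made up for the example.

```python
# Illustrative sketch of HDFS block placement (not the real HDFS API).
# Block size and replication factor mirror common HDFS defaults.
import itertools

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the usual HDFS default
REPLICATION = 3                 # each block is stored on 3 DataNodes

def split_into_blocks(file_size):
    """Return how many blocks a file of file_size bytes occupies."""
    return -(-file_size // BLOCK_SIZE)  # ceiling division

def place_replicas(num_blocks, datanodes):
    """Assign each block to REPLICATION DataNodes, round-robin."""
    placement = {}
    ring = itertools.cycle(datanodes)
    for block_id in range(num_blocks):
        placement[block_id] = [next(ring) for _ in range(REPLICATION)]
    return placement

# A 300 MB file needs 3 blocks (2 full blocks plus 1 partial one).
blocks = split_into_blocks(300 * 1024 * 1024)
layout = place_replicas(blocks, ["dn1", "dn2", "dn3", "dn4"])
print(blocks)     # 3
print(layout[0])  # ['dn1', 'dn2', 'dn3']
```

The NameNode holds exactly this kind of block-to-DataNode mapping as metadata, which is why losing a single DataNode does not lose any data: every block still has replicas elsewhere.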

What Is Hadoop Common?

As the name suggests, Hadoop Common acts as a central library of utilities. These utilities support the other modules, which communicate with each other to transfer information. Hadoop Common is an integral part of the Hadoop ecosystem, but its usage is largely limited to developers involved in programming.

What Is HBase?

HBase is short for Hadoop database. It acts as a storage unit, but it should not be confused with HDFS: HDFS is the underlying file system on which HBase operates. The advantage of using HBase is that it allows users to read and modify data in real time. It is also known as a column-oriented database because of the way its data is structured.
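The column-oriented structure can be illustrated with a toy sketch (this is not the HBase client API): each value is addressed by a row key, a column family, and a column qualifier, and versions are kept by timestamp, with reads returning the newest version. The table and column names here are invented for the example.

```python
# Toy sketch of HBase's column-oriented data model (not the HBase API):
# cells are keyed by (row, column family, qualifier) and versioned by
# timestamp; a read returns the newest version, enabling real-time updates.
import time

class ToyColumnStore:
    def __init__(self):
        # (row, family, qualifier) -> list of (timestamp, value) versions
        self._cells = {}

    def put(self, row, family, qualifier, value, ts=None):
        key = (row, family, qualifier)
        stamp = ts if ts is not None else time.time()
        self._cells.setdefault(key, []).append((stamp, value))

    def get(self, row, family, qualifier):
        versions = self._cells.get((row, family, qualifier), [])
        if not versions:
            return None
        return max(versions)[1]  # newest timestamp wins

store = ToyColumnStore()
store.put("user1", "info", "name", "Ada", ts=1)
store.put("user1", "info", "name", "Ada L.", ts=2)  # real-time update
print(store.get("user1", "info", "name"))  # Ada L.
```

Because each cell is addressed independently, rows in the same table can have entirely different sets of columns, which is what makes the model a good fit for sparse, wide data sets.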

What Is MapReduce?

MapReduce is a core component of the Hadoop ecosystem. It enables the processing of large data sets. One reason for MapReduce's popularity is its ability to process unstructured data. It is compatible with almost all popular programming languages, but the preferred language remains Java. MapReduce is often characterized as a fault-tolerant system because work runs in parallel across multiple nodes in a cluster, and failed tasks can be re-run on other nodes.
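The model itself is simple enough to sketch in a few lines. This is a minimal in-memory word count showing the map, shuffle/sort, and reduce phases on a single machine; real Hadoop jobs are typically written in Java and run distributed, so treat this only as an illustration of the flow.

```python
# Minimal in-memory sketch of the MapReduce model (word count).
# Illustrates map -> shuffle/sort -> reduce; real Hadoop jobs are
# distributed across a cluster and usually written in Java.
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in the input line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: combine the grouped values for one key."""
    return (key, sum(values))

lines = ["Hadoop stores big data", "big data needs Hadoop"]
mapped = [pair for line in lines for pair in map_phase(line)]
grouped = shuffle(mapped)
counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts["hadoop"], counts["big"])  # 2 2
```

In a real cluster, many mappers run in parallel on different blocks of the input, and the shuffle moves each key's values to the reducer responsible for it.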

What Is Hadoop YARN?

YARN stands for Yet Another Resource Negotiator. It is a framework that manages cluster resources and schedules jobs. Data sets managed under YARN can be processed by engines such as MapReduce. YARN proved to be an important addition in Hadoop 2.0.
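Resource negotiation can be pictured with a toy first-fit scheduler (this is not the YARN API, and real YARN schedulers such as Capacity or Fair are far more sophisticated): applications request containers of a given memory size, and the scheduler grants them against each node's free capacity. Node and job names are invented for the example.

```python
# Toy sketch of YARN-style resource negotiation (not the YARN API):
# container requests are granted FIFO, first-fit, against each node's
# free memory; requests that don't fit are simply left ungranted.
def allocate(nodes, requests):
    """nodes: {name: free_mb}; requests: list of (app, mb)."""
    grants = []
    free = dict(nodes)
    for app, mb in requests:
        for node, avail in free.items():
            if avail >= mb:
                free[node] -= mb
                grants.append((app, node, mb))
                break  # container granted; move to next request
    return grants, free

grants, free = allocate({"n1": 4096, "n2": 2048},
                        [("job1", 3072), ("job2", 2048), ("job3", 2048)])
print(grants)  # [('job1', 'n1', 3072), ('job2', 'n2', 2048)]
print(free)    # {'n1': 1024, 'n2': 0}
```

Note that job3 gets nothing because no node has 2048 MB free; in real YARN the request would stay queued until capacity frees up rather than being dropped.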

While working with Hadoop, you will also need to familiarize yourself with Apache Hive, Apache Pig, Apache Spark, Apache Cassandra, and others. These are all software projects (frameworks, databases, platforms, and so on) that support Hadoop. A little more detail is needed to understand how each of these integrates with Hadoop, and we will cover that in another blog post.

Hadoop has emerged as a pioneer in big data solutions. Its rise has opened up new opportunities for service providers and for institutions that offer Hadoop training. With Hadoop on the rise, everyone associated with it stands to benefit.


Apache Hive

The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL, on top of Apache Hadoop, and is part of the larger Hadoop ecosystem. Hive can be used on-premises and in the cloud with a variety of storage media, including HDFS, Azure cloud storage, Amazon Web Services S3 object storage, and Google Cloud Storage.
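The point of Hive is that queries are ordinary SQL. As a local stand-in, this sketch runs the same shape of aggregate query against sqlite3 instead of a Hive server; against Hive, an equivalent SELECT would execute as a distributed job over files in HDFS or cloud object storage. The table and data are invented for the example.

```python
# Hive exposes distributed data sets through ordinary SQL. This local
# sketch uses sqlite3 as a stand-in engine purely to show the query
# shape; Hive would run the same kind of SELECT as a distributed job.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (page TEXT, views INTEGER)")
conn.executemany("INSERT INTO page_views VALUES (?, ?)",
                 [("home", 120), ("docs", 45), ("home", 30)])

# The kind of aggregate query Hive users write every day.
rows = conn.execute(
    "SELECT page, SUM(views) FROM page_views GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('docs', 45), ('home', 150)]
```

This is exactly the appeal of Hive: analysts keep their SQL, and the engine worries about where the data physically lives.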

Apache Pig

Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

At present, Pig's infrastructure layer consists of a compiler that produces sequences of MapReduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject).
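A Pig script is essentially a sequence of dataflow steps (LOAD, FILTER, GROUP, FOREACH) that the compiler turns into MapReduce jobs. Pig Latin itself looks quite different, but a rough single-machine analogue of such a pipeline, over an invented tuple data set, looks like this:

```python
# Rough single-machine analogue of a Pig dataflow
# (LOAD -> FILTER -> GROUP -> aggregate). Pig Latin syntax differs;
# this only mirrors the pipeline structure Pig compiles to MapReduce.
from itertools import groupby

records = [("alice", 25), ("bob", 17), ("carol", 31), ("dave", 25)]  # LOAD

adults = [r for r in records if r[1] >= 18]       # FILTER age >= 18
adults.sort(key=lambda r: r[1])                   # groupby needs sorted input
by_age = {age: [name for name, _ in group]        # GROUP BY age
          for age, group in groupby(adults, key=lambda r: r[1])}
counts = {age: len(names) for age, names in by_age.items()}  # FOREACH ... COUNT
print(counts)  # {25: 2, 31: 1}
```

Each step consumes the previous step's relation and produces a new one, which is precisely the structure that lets Pig parallelize every stage across a cluster.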

Apache Spark

Apache Spark is a data processing framework that can quickly perform processing tasks on very large data sets, and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools. These two qualities are key to the worlds of big data and machine learning, which require the marshalling of massive computing power to crunch through large data stores. Spark also takes some of the programming burden of these tasks off the shoulders of developers with an easy-to-use API that abstracts away much of the grunt work of distributed computing and big data processing.
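A distinctive part of that API is lazy evaluation: transformations such as map and filter only record a pipeline, and nothing executes until an action is called. This tiny sketch imitates that behavior in plain Python (it is not the PySpark API, and the class name is invented):

```python
# Tiny sketch of Spark's programming model (not the PySpark API):
# map/filter build a lazy pipeline; nothing runs until an action
# such as collect() forces execution.
class ToyRDD:
    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []  # transformations recorded, not yet executed

    def map(self, fn):
        return ToyRDD(self._data, self._ops + [("map", fn)])

    def filter(self, fn):
        return ToyRDD(self._data, self._ops + [("filter", fn)])

    def collect(self):  # the action: run the whole recorded pipeline
        out = iter(self._data)
        for kind, fn in self._ops:
            out = map(fn, out) if kind == "map" else filter(fn, out)
        return list(out)

rdd = ToyRDD(range(6)).map(lambda x: x * x).filter(lambda x: x > 5)
print(rdd.collect())  # [9, 16, 25]
```

Deferring execution this way is what lets the real Spark engine see the whole pipeline at once and optimize it before shipping work out to the cluster.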

Apache Cassandra

Apache Cassandra is a distributed NoSQL database designed to deliver always-on availability, fast read-write performance, and linear scalability, qualities needed to meet the demands of modern applications.

Impala: An SQL query engine with massively parallel processing (MPP) capability, running natively on the Apache Hadoop framework.

Flume: A service for collecting, aggregating, and moving large amounts of log and event data into Hadoop.

HiveQL (HQL): A SQL-like query language for Hadoop, used to execute MapReduce jobs over data in HDFS.

JobTracker: The service within Hadoop that distributes MapReduce tasks to specific nodes in the cluster.

HUE: A browser-based desktop interface for interacting with Hadoop.

NameNode: The core of the HDFS file system.

Oozie: A workflow engine for Hadoop.

Sqoop: A tool designed to transfer data between Hadoop and relational databases.

Whirr: A set of libraries for running cloud services.

ZooKeeper: Allows Hadoop administrators to track and coordinate distributed applications.

This post is by Joseph Macwan, a technical writer with a keen interest in business, technology, and marketing topics. He is also associated with Aegis Softwares, which offers Apache Hadoop development services.