Vital Hadoop tools for crunching Big Data
Today, one of the most popular terms in the IT world is 'Hadoop'. Within a short span of time, Hadoop has grown massively and has proved useful for a large collection of diverse projects. The Hadoop community is evolving fast and plays a prominent role in its ecosystem.
Here is a look at the essential tools and code that come under the collective heading 'Hadoop'.
Hadoop:
When we think of Hadoop, the first thing that comes to mind is its 'map' and 'reduce' tools. Generally, the entire group of map and reduce tools is referred to as Hadoop, but strictly speaking it is the small pile of code at the center, licensed under Apache, that is Hadoop itself. This Java-based code synchronizes worker nodes in executing a function on the data stored locally on each of them; the results are then aggregated and reported. The first step, running the function against the local data, is called 'Map', and the second step of aggregating and reporting the results is called 'Reduce'. Hadoop lets programmers concentrate on writing code for the data analysis itself, and it is designed to work around the faults and errors that are to be expected from individual machines.
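As a sketch of how this looks in practice, here is a minimal word-count job written against the Java MapReduce API; the input and output paths are placeholders supplied on the command line.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      // Map: emit (word, 1) for every word found in the locally stored input split.
      public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
          }
        }
      }

      // Reduce: aggregate the counts for each word and report the total.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory (placeholder)
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (placeholder)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }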
Ambari:
Ambari is an Apache project supported by Hortonworks. It offers a web-based GUI (Graphical User Interface) with wizard scripts for setting up clusters with most of the standard components. Ambari provisions, manages and monitors Hadoop clusters.
HDFS (Hadoop Distributed File System):
HDFS, distributed under the Apache license, offers a basic framework for splitting data collections up between multiple nodes. Large files are broken into blocks, and several nodes may each hold blocks from the same file. The file system is designed to combine fault tolerance with high throughput: blocks are read as steady streams and are not usually cached, because the design favors sustained throughput over low latency.
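A minimal sketch of writing and then reading a small file through the Java HDFS client follows; the namenode address and the file path are placeholders.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");  // placeholder namenode address
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/example.txt");
        // Write: the file is split into blocks and spread across data nodes.
        try (FSDataOutputStream out = fs.create(file, true)) {
          out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
        }
        // Read: blocks are streamed back from whichever nodes hold them.
        try (FSDataInputStream in = fs.open(file)) {
          byte[] buf = new byte[32];
          int n = in.read(buf);
          System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
        }
        fs.close();
      }
    }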
HBase:
HBase is a column-oriented database management system that runs on top of HDFS. HBase applications are written in Java, very much like MapReduce applications. HBase comprises a set of tables, where each table contains rows and columns much like a traditional database. When data is written to a big table, HBase stores it, makes it searchable and automatically shares the table across multiple nodes so that MapReduce jobs can run against it locally. HBase offers a limited guarantee for local changes: the changes within a single row either succeed or fail together.
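Here is a small sketch using the HBase Java client; it assumes a table named 'users' with a column family 'info' has already been created.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {
          // All changes to a single row ("user1") succeed or fail together.
          Put put = new Put(Bytes.toBytes("user1"));
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
          table.put(put);

          Get get = new Get(Bytes.toBytes("user1"));
          Result result = table.get(get);
          System.out.println(Bytes.toString(
              result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        }
      }
    }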
Hive:
If you are already fluent in SQL, you can leverage Hadoop using Hive. Hive was developed by some folks at Facebook. Apache Hive regulates the process of extracting bits from the files stored in HDFS; it supports analysis of large datasets held in HDFS and compatible file systems. It also provides an SQL-like language called HiveQL that gets into the files and extracts the required snippets for the code.
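A minimal sketch of running a HiveQL query from Java through the Hive JDBC driver follows; the HiveServer2 address, credentials and the web_logs table are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
      public static void main(String[] args) throws Exception {
        // Load the Hive JDBC driver; HiveServer2 address and user are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://hiveserver:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             // HiveQL looks like SQL but is compiled into jobs over files in HDFS.
             ResultSet rs = stmt.executeQuery(
                 "SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")) {
          while (rs.next()) {
            System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
          }
        }
      }
    }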
Sqoop:
Apache Sqoop is specially designed to transfer bulk data efficiently from traditional databases into Hive or HBase. It can also be used the other way around, to extract data from Hadoop and export it to external structured data stores such as relational databases and enterprise data warehouses. Sqoop is a command line tool, mapping between the tables and the data storage layer and translating the tables into a configurable combination of HDFS, HBase or Hive.
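Since Sqoop is driven from the command line, the sketch below simply launches a hypothetical import from a MySQL database into Hive, wrapped in Java only to keep all the examples in one language; the connection details are made up.

    import java.util.Arrays;

    public class SqoopImportExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical database and table; --hive-import loads the result straight into Hive.
        ProcessBuilder pb = new ProcessBuilder(Arrays.asList(
            "sqoop", "import",
            "--connect", "jdbc:mysql://dbhost/sales",
            "--username", "reporter",
            "--table", "orders",
            "--hive-import"));
        pb.inheritIO();
        int exitCode = pb.start().waitFor();
        System.out.println("sqoop exited with " + exitCode);
      }
    }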
Pig:
Once the stored data is visible to Hadoop, Apache Pig dives into it and runs code written in its own language, called Pig Latin. Pig Latin is filled with abstractions for handling the data, and it comes with standard functions for common tasks like averaging data, working with dates or finding the differences between strings. Pig also allows users to write functions of their own, called UDFs (User Defined Functions), when the standard functions fall short.
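Here is a sketch of driving a short Pig Latin script from Java with PigServer; the people.tsv input file and its (name, age) layout are assumptions.

    import java.util.Iterator;
    import org.apache.pig.PigServer;
    import org.apache.pig.data.Tuple;

    public class PigExample {
      public static void main(String[] args) throws Exception {
        // "local" runs against the local file system; use "mapreduce" against a cluster.
        PigServer pig = new PigServer("local");
        // Assumed input: a tab-separated file of (name, age) records.
        pig.registerQuery("people = LOAD 'people.tsv' AS (name:chararray, age:int);");
        pig.registerQuery("adults = FILTER people BY age >= 18;");
        pig.registerQuery("grouped = GROUP adults ALL;");
        pig.registerQuery("avg_age = FOREACH grouped GENERATE AVG(adults.age);");
        Iterator<Tuple> it = pig.openIterator("avg_age");
        while (it.hasNext()) {
          System.out.println(it.next());
        }
        pig.shutdown();
      }
    }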
Zookeeper:
Zookeeper is a centralized service that maintains configuration information, handles naming and provides distributed synchronization across a cluster. It imposes a file-system-like hierarchy on the cluster and stores all of the metadata for the machines, so the work of the various machines can be synchronized.
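A small sketch of the ZooKeeper Java client storing and reading one piece of cluster metadata; the ensemble address, the znode path and its contents are placeholders.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZookeeperExample {
      public static void main(String[] args) throws Exception {
        // Connect to a ZooKeeper ensemble (placeholder address), 3-second session timeout.
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 3000, event -> {});
        String path = "/config";
        if (zk.exists(path, false) == null) {
          // Znodes form a file-system-like hierarchy of small pieces of metadata.
          zk.create(path, "replicas=3".getBytes(),
              ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        byte[] data = zk.getData(path, false, null);
        System.out.println(new String(data));
        zk.close();
      }
    }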
NoSQL:
Some Hadoop clusters integrate with NoSQL data stores that come with their own mechanisms for storing data across a cluster of nodes. This allows them to store and retrieve data with all the features of the NoSQL database, after which Hadoop can be used to schedule data analysis jobs on the same cluster.
Mahout:
Mahout is designed to bring a large number of algorithms for data analysis, classification and filtering to the Hadoop cluster. Many of the standard algorithms, such as K-means, Dirichlet, parallel pattern and Bayesian classification, are ready to run on the data with a Hadoop-style map and reduce.
Lucene:
Lucene, written in Java, integrates easily with Hadoop and is a natural companion for it. It is a tool meant for indexing large blocks of unstructured text. Lucene handles the indexing, while Hadoop handles the distributed queries across the cluster. Lucene-Hadoop features are evolving rapidly as new projects are developed.
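A minimal sketch of indexing and searching a line of text with Lucene's core Java API follows; the on-disk index directory and the field name are arbitrary choices.

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class LuceneExample {
      public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory dir = FSDirectory.open(Paths.get("lucene-index"));

        // Index a piece of unstructured text.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
          Document doc = new Document();
          doc.add(new TextField("body", "Hadoop handles the distributed queries", Field.Store.YES));
          writer.addDocument(doc);
        }

        // Search the index for a term.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
          IndexSearcher searcher = new IndexSearcher(reader);
          TopDocs hits = searcher.search(new QueryParser("body", analyzer).parse("distributed"), 10);
          for (ScoreDoc hit : hits.scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("body"));
          }
        }
      }
    }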
Avro:
Avro is a serialization system that bundles the data together with a schema for understanding it. Each data file carries a JSON schema in its header that explains how the data should be parsed, so there is no need to write extra tags into the data to mark the fields. The output is considerably more compact than traditional formats like XML.
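A small sketch of defining a schema in JSON and writing one record with the Avro Java library; the User schema and the output file name are assumptions.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroExample {
      public static void main(String[] args) throws Exception {
        // The schema is plain JSON; it is written once into the file header,
        // so individual records need no field tags.
        String schemaJson = "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
            + "{\"name\":\"name\",\"type\":\"string\"},"
            + "{\"name\":\"age\",\"type\":\"int\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "Alice");
        user.put("age", 30);

        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
          writer.create(schema, new File("users.avro"));  // compact binary output
          writer.append(user);
        }
      }
    }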
Oozie:
A job can be simplified by breaking it into steps, and once a project is broken into multiple Hadoop jobs, Oozie starts processing them in the right sequence. It manages the workflow specified as a DAG (Directed Acyclic Graph), so there is no need to monitor the individual jobs by hand.
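Here is a hedged sketch of submitting a predefined workflow through the Oozie Java client; the Oozie URL, the HDFS application path holding the workflow.xml that describes the DAG, and the nameNode/jobTracker addresses are all placeholders.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;

    public class OozieSubmitExample {
      public static void main(String[] args) throws Exception {
        // Placeholder Oozie server URL.
        OozieClient oozie = new OozieClient("http://ooziehost:11000/oozie");

        Properties conf = oozie.createConfiguration();
        // The workflow.xml at this HDFS path defines the DAG of Hadoop jobs.
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/workflow");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager:8032");

        String jobId = oozie.run(conf);  // Oozie runs the steps in the right sequence
        System.out.println("Submitted workflow " + jobId
            + ", status: " + oozie.getJobInfo(jobId).getStatus());
      }
    }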
GIS Tools:
Working with geographic maps is a big job for clusters running Hadoop. The GIS (Geographic Information System) tools for Hadoop projects have adapted the best Java-based tools for understanding geographic information to run with Hadoop. The databases can now handle geographic queries using coordinates, and the code can deploy the GIS tools.
Flume:
Gathering the data is just as much of a challenge as storing and analyzing it. Apache Flume dispatches 'special agents' to gather the information that will be stored in HDFS. The information gathered can come from log files, the Twitter API or website scrapes, and these data flows can be chained together and subjected to analysis.
Spark:
Spark is the next generation: it works much like Hadoop but processes data cached in memory. Its objective is to make data analysis fast to run and fast to write, using a general execution model. It can optimize arbitrary operator graphs and supports in-memory computing, which lets it query data faster than disk-based engines like Hadoop's MapReduce.
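A short sketch with Spark's Java API that caches a log file in memory and reuses it across two counts; the input path is a placeholder, and "local[*]" simply runs the job on the local machine.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkExample {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("log-scan").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
          // cache() keeps the lines in memory, so the second query avoids re-reading disk.
          JavaRDD<String> lines = sc.textFile("hdfs://namenode:8020/logs/app.log").cache();
          long errors = lines.filter(line -> line.contains("ERROR")).count();
          long warnings = lines.filter(line -> line.contains("WARN")).count();
          System.out.println("errors=" + errors + " warnings=" + warnings);
        }
      }
    }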
SQL on Hadoop:
When a quick, ad-hoc query of all the data in the cluster is required, a new Hadoop job could be written, but that takes time. When programmers found themselves doing this more often, they came up with tools that let such queries be expressed in the simpler language of SQL. These tools offer quick access to the results.