Using Scala To Work With Hadoop
Cloudera has a great toolkit to work with Hadoop. Specifically, it is focused on building distributed systems and services on top of the Hadoop Ecosystem.
http://cloudera.github.io/cdk/docs/0.2.0/cdk-data/guide.html
And the examples are in Scala!!!!
Here is how you work with generic records on the ...
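The guide's own examples use the CDK dataset API; as a rough stand-in, here is a minimal plain-Avro sketch in Scala of what "generic" means in this context: building and writing a GenericRecord with no generated classes. The Event schema and output file name are made up for illustration.

import java.io.File
import org.apache.avro.Schema
import org.apache.avro.file.DataFileWriter
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}

object GenericAvroSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical "Event" schema, declared inline for the example
    val schema = new Schema.Parser().parse(
      """{"type":"record","name":"Event","fields":[
        |{"name":"id","type":"long"},
        |{"name":"message","type":"string"}]}""".stripMargin)

    // Build a generic record without generating any Java classes from the schema
    val record: GenericRecord = new GenericData.Record(schema)
    record.put("id", java.lang.Long.valueOf(1L))
    record.put("message", "hello from Scala")

    // Write it to a local Avro data file, ready to be pushed into HDFS
    val writer = new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
    writer.create(schema, new File("events.avro"))
    writer.append(record)
    writer.close()
  }
}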
MongoDB 2.6 Released
In the five years since the initial release of MongoDB, and after hundreds of thousands of deployments, we have learned a lot. The time has come to take everything we have learned and create a basis for continued innovation over ...
How To Choose The Best Tool For Your Big Data Project
Trying to choose the right tool for a big data project? This chart (and three simple rules) can help guide you through the options. This chart is based on one shown by Microsoft Research senior research program manager Wenming Ye ...
10 Big Data Analytics Use Cases for Healthcare IT
Big data means a lot of things to a lot of different people, but what is becoming increasingly clear as the largest market players' strategies start to unfold is that big data is about real-time analysis and data-driven decision-making. Now Big ...
Impala and SQL on Hadoop
The origins of Impala can be found in F1 – The Fault-Tolerant Distributed RDBMS Supporting Google’s Ad Business.
One of the many differences between MapReduce and Impala is that in Impala the intermediate data moves directly from process to process instead of being stored ...
Pivotal Launches Its Big Data Suite
When Pivotal was spun out of VMware and EMC, many people were excited about a well-funded entity, chock full of some of the coolest modern tech, and without the hang-ups of having to think about existing products or revenue streams. ...
Selecting the right SQL-on-Hadoop engine to access big data
With SQL-on-Hadoop technologies, it's possible to access big data stored in Hadoop by using the familiar SQL language. Users can plug in almost any reporting or analytical tool to analyze and study the data. Before SQL-on-Hadoop, accessing big data was ...
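As a concrete illustration of "plugging in" via SQL, here is a small Scala sketch that queries Impala through the standard Hive JDBC driver; the host, the port (21050 is Impala's usual HiveServer2-compatible port), and the web_logs table are assumptions for a local test cluster.

import java.sql.DriverManager

object ImpalaJdbcSketch {
  def main(args: Array[String]): Unit = {
    // Impala speaks the HiveServer2 protocol, so the plain Hive JDBC driver works
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:21050/;auth=noSasl")
    try {
      val rs = conn.createStatement()
        .executeQuery("SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")
      while (rs.next()) {
        println(s"${rs.getString("page")}\t${rs.getLong("hits")}")
      }
    } finally {
      conn.close()
    }
  }
}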
Configure Eclipse for MapReduce
1. Download Eclipse Europa or Indigo
2. Download the Hadoop Eclipse plugin, e.g. hadoop-eclipse-plugin-1.0.3.jar
3. Copy the jar into the Eclipse plugins folder
4. Open Eclipse
5. Add a Map/Reduce server
6. Add a New DFS Location
Location name: localhost
Map/Reduce Master port: 9001
DFS Master port: 9000
Finish
7. New -> Other -> Map/Reduce Project
-> ...
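Once the plugin is configured, a quick way to verify the setup is to run a small job against it. Below is a minimal word-count sketch written in Scala against the Hadoop 1.x mapreduce API; the localhost:9000/9001 addresses mirror the DFS Master and Map/Reduce Master settings above, and the input/output paths are whatever you pass on the command line.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import scala.collection.JavaConverters._

// Mapper: split each line into words and emit (word, 1)
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { w =>
      word.set(w)
      context.write(word, one)
    }
  }
}

// Reducer: sum the counts emitted for each word
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
    val total = values.asScala.map(_.get).sum
    context.write(key, new IntWritable(total))
  }
}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // Same endpoints as the DFS Master / Map/Reduce Master entered in the plugin
    conf.set("fs.default.name", "hdfs://localhost:9000")
    conf.set("mapred.job.tracker", "localhost:9001")

    val job = new Job(conf, "word count")
    job.setJarByClass(WordCount.getClass)
    job.setMapperClass(classOf[TokenMapper])
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))
    FileOutputFormat.setOutputPath(job, new Path(args(1)))
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}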
A guide to NoSQL offerings
Amazon Web Services: DynamoDB is a NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic. Users simply tell the service how many requests need to ...
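That "how many requests" knob is DynamoDB's provisioned throughput. As a rough sketch using the AWS SDK for Java (v1) from Scala, creating a table with explicit read/write capacity might look like the following; the table name, key, and capacity numbers are invented for illustration.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder
import com.amazonaws.services.dynamodbv2.model.{AttributeDefinition, CreateTableRequest, KeySchemaElement, KeyType, ProvisionedThroughput, ScalarAttributeType}

object ProvisionedTableSketch {
  def main(args: Array[String]): Unit = {
    val client = AmazonDynamoDBClientBuilder.defaultClient()

    val request = new CreateTableRequest()
      .withTableName("events") // hypothetical table
      .withKeySchema(new KeySchemaElement("id", KeyType.HASH)) // hash key named "id"
      .withAttributeDefinitions(new AttributeDefinition("id", ScalarAttributeType.S))
      // "Tell the service how many requests" it should serve: reads/writes per second
      .withProvisionedThroughput(new ProvisionedThroughput(10L, 5L))

    client.createTable(request)
  }
}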
Apache Mahout is moving on from MapReduce
Apache Mahout, a machine learning library for Hadoop since 2009, is joining the exodus away from MapReduce. The project’s community has decided to rework Mahout to support the increasingly popular Apache Spark in-memory data-processing framework, as well as the H2O engine for ...
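The appeal for a machine learning library is that Spark keeps the working set in memory across iterations. A toy Scala sketch of that pattern (not Mahout's actual algorithms), running in local mode:

import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("iterative-sketch").setMaster("local[*]"))

    // Cache the data once; every pass below reads it from memory, whereas a
    // MapReduce-based implementation would re-read it from HDFS on each iteration.
    val points = sc.parallelize(Seq(1.0, 2.0, 3.0, 4.0, 5.0)).cache()

    var estimate = 0.0
    for (_ <- 1 to 10) {
      // Toy update step: pull the estimate toward the mean of the cached data
      val mean = points.sum() / points.count()
      estimate += 0.5 * (mean - estimate)
    }
    println(s"estimate after 10 passes: $estimate")
    sc.stop()
  }
}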