Managing Multiple Resources in Hadoop 2 with YARN
As Apache Hadoop becomes ubiquitous, it is increasingly common for users to run diverse sets of workloads on Hadoop, and these jobs are more likely to have different resource profiles. For example, a MapReduce distcp job or a Cloudera Impala query that does a simple scan on a large table may be heavily disk-bound and require little memory. Conversely, an Apache Spark (incubating) job executing an iterative machine-learning algorithm with complex updates may need to hold the entire dataset in memory and use bursts of CPU to perform complex computation on it.
For that reason, the new YARN framework in Hadoop 2 allows workloads to share cluster resources dynamically between a variety of processing frameworks, including MapReduce, Impala, and Spark. YARN currently handles memory and CPU and will coordinate additional resources like disk and network I/O in the future.
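On the node side, administrators declare how much memory and how many virtual cores each NodeManager offers to containers via `yarn-site.xml`. A minimal fragment might look like the following; the values are placeholders, not recommendations:

```xml
<!-- Example yarn-site.xml fragment; the numbers are placeholders. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value> <!-- memory (MB) this node offers to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value> <!-- virtual cores this node offers to containers -->
</property>
```

With both dimensions declared, the scheduler can fit a memory-hungry container and a CPU-hungry container on the same node without treating one resource as a stand-in for the other.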
Accounting for memory, CPU, and other resources separately confers several advantages:
- It allows us to treat tenants on a Hadoop cluster more fairly by rationing the resources that are most contended at any given time.
- It makes resource configuration more straightforward, because a single resource does not need to be used as a proxy for others.
- It provides more predictable performance by not oversubscribing nodes, and protects higher-priority workloads with better isolation.
- Finally, it can increase cluster utilization because all the above mean that resource needs and capacities can be configured less conservatively.
One of Cloudera’s top priorities in Cloudera Enterprise 5 (in beta at the time of writing) is to provide smooth and powerful resource management functionality on Hadoop via YARN. In this post, I’ll describe the work we’ve done recently to allow YARN to support multiple resources, from multi-resource scheduling with Dominant Resource Fairness in the Fair Scheduler to enforcement on the node level with cgroups. The changes discussed below are included in YARN/MR2 from CDH 4.4 onward.

by Sandy Ryza
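To give a feel for the scheduling side, here is a simplified model of Dominant Resource Fairness: each user's "dominant share" is the largest fraction of any one resource they hold, and the scheduler repeatedly grants a task to whichever user currently has the smallest dominant share. This is an illustrative sketch, not YARN's actual Fair Scheduler code; the cluster capacity, user names, and task shapes are invented for the example.

```python
# Sketch of Dominant Resource Fairness (DRF) over memory and vcores.
# All numbers and names below are hypothetical.

CAPACITY = {"vcores": 9, "memory_gb": 18}

def dominant_share(alloc):
    # A user's dominant share is the largest fraction of any single
    # resource they hold, relative to total cluster capacity.
    return max(alloc[r] / CAPACITY[r] for r in CAPACITY)

def drf_allocate(demands, max_rounds=1000):
    # Repeatedly grant one task to the user with the smallest
    # dominant share, as long as that user's task still fits.
    alloc = {u: {r: 0 for r in CAPACITY} for u in demands}
    used = {r: 0 for r in CAPACITY}
    for _ in range(max_rounds):
        eligible = [u for u in demands
                    if all(used[r] + demands[u][r] <= CAPACITY[r]
                           for r in CAPACITY)]
        if not eligible:
            break  # cluster is saturated for every user's task shape
        user = min(eligible, key=lambda u: dominant_share(alloc[u]))
        for r in CAPACITY:
            alloc[user][r] += demands[user][r]
            used[r] += demands[user][r]
    return alloc

# A memory-heavy user and a CPU-heavy user sharing one cluster.
demands = {
    "analytics": {"vcores": 1, "memory_gb": 4},
    "batch":     {"vcores": 3, "memory_gb": 1},
}
result = drf_allocate(demands)
```

Note that DRF equalizes dominant shares rather than task counts: the memory-heavy user ends up with two-thirds of the cluster's memory and the CPU-heavy user with two-thirds of its vcores, which a single-resource scheduler could not express.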