A Survey of Hadoop Schedulers

Introduction

In this post, we'll examine a few Hadoop schedulers that can offer performance improvements over the default.  The use of big data to drive decision making has exploded over the past decade, and organizations with big data clusters have growing user bases as queries inform decisions across all departments.  Constant querying can leave a cluster resource constrained, so much attention has been placed on improving Hadoop's performance for enterprise use.  Research into Hadoop scheduling algorithms has shown major improvements in large-volume processing, but there is no ubiquitous scheduling solution: selecting an appropriate scheduler depends on the individual cluster and the jobs it processes.

When analyzing Hadoop's scheduling performance, data locality is an important benchmark.  Data locality describes the proximity of a task to the data it needs to process.  If the task is scheduled to a node that already holds that data, it is considered data local.  If the task and its data are on the same rack but not the same node, it is considered rack local.  If neither condition is met, the task is non-local.  Moving data is a heavy burden on resources and time, so a data-local task is optimal.
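
To make these tiers concrete, here is a minimal sketch in Java of how a scheduler might classify a task-node pairing. The hostname and rack fields and the classify function are illustrative assumptions, not Hadoop's actual API.

// Hypothetical locality classification, from best to worst.
enum Locality { DATA_LOCAL, RACK_LOCAL, NON_LOCAL }

final class LocalityCheck {
    static Locality classify(String dataHost, String dataRack,
                             String nodeHost, String nodeRack) {
        if (nodeHost.equals(dataHost)) return Locality.DATA_LOCAL; // data already on this node
        if (nodeRack.equals(dataRack)) return Locality.RACK_LOCAL; // same rack, different node
        return Locality.NON_LOCAL;                                 // data must cross racks
    }

    public static void main(String[] args) {
        System.out.println(classify("host-1", "rack-A", "host-1", "rack-A")); // DATA_LOCAL
        System.out.println(classify("host-1", "rack-A", "host-2", "rack-A")); // RACK_LOCAL
        System.out.println(classify("host-1", "rack-A", "host-3", "rack-B")); // NON_LOCAL
    }
}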

The Default Scheduler: FIFO (First In First Out)

Hadoop’s default scheduler follows the queue-management method of First In, First Out, with some logic to maximize data locality.  When a task reaches the front of the scheduling queue, the scheduler queries available nodes for any that hold the data set needed to complete the task.  If one is found, the task is assigned to that node; if not, the task is assigned to a node without the data.
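
Here is a minimal sketch of that selection logic in Java, assuming a simplified model in which each task needs a single data block and each node advertises the blocks it holds. The Task and Node types and the assign method are illustrative, not Hadoop's actual interfaces.

import java.util.*;

final class FifoSketch {
    record Task(String id, String neededBlock) {}
    record Node(String id, Set<String> localBlocks) {}

    static Node assign(Deque<Task> queue, List<Node> freeNodes) {
        Task head = queue.peekFirst();          // FIFO: only the head task is considered
        if (head == null || freeNodes.isEmpty()) return null;
        queue.removeFirst();
        for (Node n : freeNodes) {              // prefer a node that already holds the data
            if (n.localBlocks().contains(head.neededBlock())) return n;
        }
        return freeNodes.get(0);                // otherwise settle for any free node
    }

    public static void main(String[] args) {
        Deque<Task> queue = new ArrayDeque<>(List.of(new Task("t1", "blk-7")));
        List<Node> free = List.of(new Node("n1", Set.of("blk-3")),
                                  new Node("n2", Set.of("blk-7")));
        System.out.println(assign(queue, free).id()); // n2: holds blk-7, so data local
    }
}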

Adjustments to Hadoop’s configuration can improve FIFO.  For example, setting a delay interval increases the chance that a node with optimal data locality will be found.  When set, the delay interval causes the scheduler to wait for a period before settling on a suboptimal node.  After the delay expires, the scheduler again checks whether a node with local data has become available.  If so, the task is assigned to the optimal node; if not, a suboptimal node is selected.
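
The delay logic can be sketched in a few lines. The per-task skip counter and threshold below are illustrative stand-ins; Hadoop expresses the delay through its own configuration properties rather than through a class like this.

import java.util.*;

final class DelaySketch {
    private final Map<String, Integer> skips = new HashMap<>(); // task id -> times skipped
    private final int maxSkips;                                 // the delay budget

    DelaySketch(int maxSkips) { this.maxSkips = maxSkips; }

    // Returns true if the task should keep waiting for a data-local node.
    boolean shouldWait(String taskId, boolean offeredNodeIsLocal) {
        if (offeredNodeIsLocal) { skips.remove(taskId); return false; } // take it now
        int seen = skips.merge(taskId, 1, Integer::sum);
        return seen <= maxSkips; // wait until the delay budget is spent
    }

    public static void main(String[] args) {
        DelaySketch delay = new DelaySketch(2);
        System.out.println(delay.shouldWait("t1", false)); // true: first skip, keep waiting
        System.out.println(delay.shouldWait("t1", false)); // true: second skip, still waiting
        System.out.println(delay.shouldWait("t1", false)); // false: budget spent, settle
    }
}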

LSAP Scheduler

Proposed by researchers at Indiana University, the LSAP scheduler uses Linear Sum Assignment Problem (LSAP) methods to find the optimal node for each task.  The approach assigns a cost to every possible pairing of a queued task with an available node and then chooses the lowest-cost assignment.  The cost function could use any metric relevant to Hadoop performance; in the research at IU, each task-node pairing was given a cost based on the availability of the data used in processing.  This differs from FIFO by analyzing all tasks in the queue and their optimal node assignments, instead of only the first task in the queue.  The lowest-cost assignment typically contains the most data-local placements.
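
Here is a minimal sketch of the idea, assuming a hypothetical cost function of 0 for a data-local pairing, 1 for rack-local, and 2 for non-local. Practical LSAP solvers such as the Hungarian algorithm run in O(n^3); the brute-force permutation search below exists only to show what is being minimized and is usable only on tiny inputs.

import java.util.*;

final class LsapSketch {
    // Returns the minimum total cost over all one-to-one task-to-node assignments.
    static int bestCost(int[][] cost) {
        int[] perm = new int[cost.length];
        for (int i = 0; i < perm.length; i++) perm[i] = i;
        return search(cost, perm, 0);
    }

    // Enumerate every pairing of task i with a remaining node (all permutations).
    private static int search(int[][] cost, int[] perm, int i) {
        if (i == perm.length) {
            int total = 0;
            for (int t = 0; t < perm.length; t++) total += cost[t][perm[t]];
            return total;
        }
        int best = Integer.MAX_VALUE;
        for (int j = i; j < perm.length; j++) {
            swap(perm, i, j);
            best = Math.min(best, search(cost, perm, i + 1));
            swap(perm, i, j);
        }
        return best;
    }

    private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[][] cost = { {0, 2, 2},   // task 0 is data local on node 0
                         {2, 2, 1},   // task 1 is rack local on node 2
                         {2, 0, 2} }; // task 2 is data local on node 1
        System.out.println(bestCost(cost)); // 1: two data-local placements plus one rack-local
    }
}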

Researchers saw significant performance improvements using this scheduler.  While controlling for the number of tasks, replication factor, and number of nodes, the LSAP scheduler yielded results around 15% better than those of the default Hadoop scheduler.

Network-Aware Scheduler

Geographically distributed Hadoop clusters are a common implementation, meaning nodes considered for the same task may be thousands of miles apart.  Because of this, a node that is non-local but in the same physical location as its data may be more efficient than one in a different data center altogether.  Choosing the node in the same data center eliminates moving data over the open internet, which can be a costly process.

Researchers at Louisiana State developed a scheduling algorithm that takes the distributed network into account.  Their scheduler is an extension of the default FIFO scheduler and follows many of the same processes, except that it avoids moving data across networks whenever possible.  In tests, the network-aware scheduler showed improvements over the default: when processing around 1,000 tasks, the network-aware clusters completed all jobs about 1,000 seconds faster.
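
A minimal sketch of the extra locality tier in Java, assuming each node reports the data center it lives in; the Node record and pick method are illustrative, not the researchers' implementation.

import java.util.*;

final class NetworkAwareSketch {
    record Node(String id, String host, String dataCenter) {}

    static Node pick(String dataHost, String dataCenter, List<Node> freeNodes) {
        Node sameSite = null;
        for (Node n : freeNodes) {
            if (n.host().equals(dataHost)) return n; // best: data already on the node
            if (sameSite == null && n.dataCenter().equals(dataCenter)) sameSite = n;
        }
        // next best: same data center, so nothing moves over the open internet
        if (sameSite != null) return sameSite;
        return freeNodes.isEmpty() ? null : freeNodes.get(0);
    }

    public static void main(String[] args) {
        List<Node> free = List.of(new Node("n1", "host-9", "us-east"),
                                  new Node("n2", "host-4", "eu-west"));
        System.out.println(pick("host-1", "eu-west", free).id()); // n2: same data center
    }
}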

CASH: Context Aware Scheduler

When a node fails, organizations often replace it with whatever hardware is cost effective at the time.  This practice leads to clusters made up of differing nodes, known as heterogeneous clusters.  These variations in hardware can cause nodes to perform at different levels: some nodes may be better suited to computational processing, while others are better at writing to disk.  A Hadoop cluster that is aware of these variations in performance has the potential to be more efficient.

Researchers at the Sri Sathya Sai Institute of Higher Learning developed a scheduling algorithm that takes node and job variations into consideration when assigning tasks to nodes.  The scheduler follows the logic that a node should be assigned the tasks for which it is best suited; for example, a node with high computational performance should receive CPU-intensive tasks.  The scheduler assumes that jobs (and thereby, tasks) are periodic, and it uses initial runs to classify jobs as CPU or IO intensive.  It also uses a benchmarking program to grade each individual node on its computational and disk-writing abilities.
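
A minimal sketch of the matching step in Java, assuming tasks arrive pre-classified from earlier runs and each node carries benchmark grades; the two-way CPU/IO split and the numeric scores are assumptions for illustration, not the paper's exact scheme.

import java.util.*;

final class CashSketch {
    enum Kind { CPU, IO }
    record Node(String id, double cpuScore, double ioScore) {} // illustrative benchmark grades

    // Send the task to the free node strongest in the resource the task stresses.
    static Node pick(Kind taskKind, List<Node> freeNodes) {
        return freeNodes.stream()
                .max(Comparator.comparingDouble(
                        (Node n) -> taskKind == Kind.CPU ? n.cpuScore() : n.ioScore()))
                .orElse(null); // no free node available
    }

    public static void main(String[] args) {
        List<Node> free = List.of(new Node("n1", 9.0, 3.0), new Node("n2", 4.0, 8.5));
        System.out.println(pick(Kind.CPU, free).id()); // n1: strongest CPU grade
        System.out.println(pick(Kind.IO, free).id());  // n2: strongest disk grade
    }
}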

When run on a Hadoop cluster with varied node quality and task classifications, the scheduler showed substantial improvements.  In the ideal situation, where task and node classifications matched, performance improved by over 30%.  Even in the least optimal situation, where task and node classifications were mismatched, improvements reached up to 20% above FIFO.

Conclusions

Most enterprise Hadoop clusters would benefit from an advanced scheduling algorithm; however, the choice must be customized for each implementation.  To adopt a new scheduler, an enterprise needs a skilled technician who can analyze the delays and inefficiencies caused by the default scheduler.

One avenue for enhancement would be to combine scheduling styles to maximize efficiency for the right clusters.  For instance, the network-aware scheduler is an extension of the FIFO scheduler, but if it used the LSAP model to score costs across multiple nodes and multiple tasks at once, even greater improvements could be seen for data processing on distributed clusters.
