SAF: Simulated Annealing Fair Scheduling for Hadoop Yarn Clusters

08/28/2020
by Mahsa Ghanavatinasab, et al.

Apache introduced YARN as the next generation of the Hadoop framework, providing resource management and a central platform for consistent data governance across Hadoop clusters. Hadoop YARN supports multiple processing frameworks, such as MapReduce, and works with different scheduling policies, including the FIFO, Capacity, and Fair schedulers. Among these, Dominant Resource Fairness (DRF) offers short-term convergence to fairness for multi-type resource allocation without considering historical information. However, DRF's performance remains unsatisfying because of the trade-off between fairness and resource utilization. To address this problem, we propose Simulated Annealing Fair scheduling (SAF), a long-term fair resource-allocation scheme that achieves both fairness and strong performance in terms of resource utilization and makespan. We introduce a new parameter, entropy, which indicates the disorder in the fairness of the allocated resources across the whole cluster. We implemented SAF as a pluggable scheduler in a Hadoop YARN cluster and evaluated it with standard MapReduce benchmarks in the YARN Scheduler Load Simulator (SLS) and the CloudSim Plus simulation framework. The results from both simulation tools support our claim: compared to DRF, SAF significantly increases the resource utilization of YARN clusters and reduces makespan to an appropriate level.
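As a rough illustration of the core idea only (not the paper's actual SAF algorithm, which is not specified in this abstract), the sketch below applies simulated annealing to reduce a toy "entropy" of fairness. Here the entropy is assumed to be the variance of users' DRF dominant shares, and the annealer perturbs allocations by moving single resource units between users; all names and the neighbor move are illustrative assumptions.

```python
import math
import random

def dominant_share(alloc, capacity):
    # DRF dominant share: the largest fraction of any resource type
    # that this user's allocation consumes.
    return max(alloc[r] / capacity[r] for r in capacity)

def fairness_entropy(allocs, capacity):
    # Assumed stand-in for the paper's entropy: variance of dominant
    # shares across users (0 means perfectly even dominant shares).
    shares = [dominant_share(a, capacity) for a in allocs]
    mean = sum(shares) / len(shares)
    return sum((s - mean) ** 2 for s in shares) / len(shares)

def anneal(allocs, capacity, steps=5000, t0=1.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    users = list(range(len(allocs)))
    resources = list(capacity)
    cur = [dict(a) for a in allocs]
    cur_e = fairness_entropy(cur, capacity)
    best, best_e = [dict(a) for a in cur], cur_e
    t = t0
    for _ in range(steps):
        # Neighbor move: shift one unit of a random resource
        # from user u to user v.
        u, v = rng.sample(users, 2)
        r = rng.choice(resources)
        if cur[u][r] == 0:
            continue
        cur[u][r] -= 1
        cur[v][r] += 1
        e = fairness_entropy(cur, capacity)
        # Accept improvements always; accept worse states with
        # Boltzmann probability that shrinks as temperature cools.
        if e < cur_e or rng.random() < math.exp((cur_e - e) / max(t, 1e-9)):
            cur_e = e
            if e < best_e:
                best_e, best = e, [dict(a) for a in cur]
        else:
            cur[u][r] += 1  # revert the move
            cur[v][r] -= 1
        t *= cooling
    return best, best_e
```

For example, starting from a skewed allocation such as `[{"cpu": 8, "mem": 2}, {"cpu": 0, "mem": 6}]` on a cluster with capacity `{"cpu": 8, "mem": 8}`, the annealer drifts toward allocations whose dominant shares are nearly equal, driving the toy entropy toward zero while conserving total resources.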
