2020
DOI: 10.48550/arxiv.2008.12586
Preprint

SAF: Simulated Annealing Fair Scheduling for Hadoop Yarn Clusters

Mahsa Ghanavatinasab, Mastaneh Bahmani, Reza Azmi

Abstract: Apache introduced YARN as the next generation of the Hadoop framework, providing resource management and a central platform to deliver consistent data governance tools across Hadoop clusters. Hadoop YARN supports multiple frameworks, such as MapReduce, for processing different types of data, and works with different scheduling policies such as the FIFO, Capacity, and Fair schedulers. DRF is the best option: it uses short-term convergence to fairness for multi-type resource allocation, without considering history information…
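
The dominant-share comparison behind DRF, which the abstract refers to, can be illustrated with a short sketch. This is not the paper's SAF algorithm or YARN's implementation; it is a minimal Python illustration, assuming a hypothetical two-resource cluster with illustrative user names and usages, of how DRF offers the next allocation to the user with the smallest dominant share.

# Hypothetical sketch of the core DRF step: a user's dominant share is the
# largest fraction of any single resource type they hold, and the scheduler
# offers the next container to the user with the smallest dominant share.
# Cluster capacity, users, and usages below are illustrative assumptions.

CLUSTER = {"cpu": 64, "memory_gb": 256}   # assumed total cluster capacity

def dominant_share(usage):
    """Largest per-resource fraction of the cluster this user currently holds."""
    return max(usage[r] / CLUSTER[r] for r in CLUSTER)

def pick_next_user(usages):
    """The user with the smallest dominant share receives the next allocation."""
    return min(usages, key=lambda u: (dominant_share(usages[u]), u))

usages = {
    "a": {"cpu": 20, "memory_gb": 20},   # dominant share = 20/64  ~ 0.31 (CPU)
    "b": {"cpu": 4,  "memory_gb": 60},   # dominant share = 60/256 ~ 0.23 (memory)
}
print(pick_next_user(usages))            # "b", the user furthest from its fair share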

Cited by 1 publication (1 citation statement)
References 28 publications
“…The default policies in the YARN scheduler include dominant resource fairness (DRF), a FAIR policy that gives the highest weight to the most resource-demanding node. [7] It tracks the minimum resources required by each instance. If an instance has already been allotted its minimum required share (minshare) of resources, then its request is dropped irrespective of priority.…”
Section: Introduction (mentioning)
confidence: 99%
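
The minshare rule described in the quoted passage can be sketched in the same style. The names and the single tracked resource (memory) below are assumptions for illustration, not YARN source code; the sketch only shows the quoted idea that a request from an instance already holding its configured minshare is dropped regardless of its priority.

# Hypothetical sketch of the quoted minshare rule: a request from an instance
# that already holds its minimum share (minshare) is dropped, regardless of
# the request's priority. Field names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Request:
    instance: str
    demand_gb: int
    priority: int        # ignored by the minshare check, per the quoted passage

def admit_requests(requests, allocated_gb, min_share_gb):
    """Keep only requests from instances still below their minshare."""
    admitted = []
    for req in requests:
        if allocated_gb[req.instance] >= min_share_gb[req.instance]:
            continue     # already at minshare: request is dropped
        admitted.append(req)
    return admitted

requests = [
    Request("analytics", demand_gb=8, priority=10),  # already has its minshare
    Request("etl",       demand_gb=4, priority=1),   # still below its minshare
]
allocated_gb = {"analytics": 80, "etl": 10}
min_share_gb = {"analytics": 64, "etl": 64}
print([r.instance for r in admit_requests(requests, allocated_gb, min_share_gb)])
# prints ['etl']: the higher-priority analytics request is dropped anyway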