2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)
DOI: 10.1109/retis.2015.7232871
Workload characteristics and resource aware Hadoop scheduler

Abstract: Hadoop MapReduce is one of the most widely used platforms for large-scale data processing. A Hadoop cluster has machines with different resources, including memory size, CPU capability, and disk space. This introduces the challenging research issue of improving Hadoop's performance through proper resource provisioning. The work presented in this paper focuses on optimizing job scheduling in Hadoop. A Workload Characteristic and Resource Aware (WCRA) Hadoop scheduler is proposed that classifies jobs into CPU bound and …
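The job classification the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the metric names, the 0.5 ratio threshold, and the classification rule are not taken from the paper, which is truncated here before its classification criteria are stated.

```python
# Hypothetical sketch of labelling MapReduce jobs CPU-bound or I/O-bound
# from observed task metrics. The 0.5 ratio threshold and the metrics
# chosen are illustrative assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class TaskMetrics:
    cpu_seconds: float      # CPU time consumed by the task
    io_wait_seconds: float  # time the task spent blocked on disk I/O

def classify_job(samples: list[TaskMetrics],
                 cpu_ratio_threshold: float = 0.5) -> str:
    """Label a job by the share of CPU time across its sampled tasks."""
    total_cpu = sum(s.cpu_seconds for s in samples)
    total_io = sum(s.io_wait_seconds for s in samples)
    total = total_cpu + total_io
    if total == 0:
        return "unknown"
    return "cpu-bound" if total_cpu / total >= cpu_ratio_threshold else "io-bound"

print(classify_job([TaskMetrics(8.0, 2.0), TaskMetrics(9.0, 1.0)]))  # cpu-bound
```

A scheduler built on such a label could then prefer placing CPU-bound tasks on nodes with spare CPU capacity and I/O-bound tasks on nodes with idle disks.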

Cited by 6 publications (6 citation statements)
References 16 publications (16 reference statements)
“…3) Workload Characteristic and Resource Aware Scheduler: In this paper, the authors propose WCRA-scheduling of Hadoop clusters (Workload Characteristic and Resource Aware) [12]. WCRA-scheduling checks the CPU, RAM and I/O-load on the nodes first.…”
Section: Related Work (mentioning) · confidence: 99%
“…The work bears some similarity to [7], but also embraces RAM as an important parameter, ensuring that more than 25% of the primary memory is always available before scheduling a job. The authors argue that "is critical in case of CPU and Disk I/O bound tasks" [12]. It was found that "compute node works significantly if it has the available physical memory greater than 25%.…”
Section: Related Work (mentioning) · confidence: 99%
“…So in AES-MR each map function is performed in such a way that it decrypts the entire encrypted file, chunk by chunk in the Map phase and forwards the decrypted result to the reducer phase to get the original file. [3].…”
Section: (2) Decryption (mentioning) · confidence: 99%
“…The trend for larger data sets is due to the additional information which is derivable from analysis of one large set of related data, as compared to separate smaller sets with the same total amount of data. Big data is difficult to work with the most relational database management systems and desktop statistics and visualization packages, instead they require massively parallel softwares running on tens, hundreds, or thousands of servers [3].…”
Section: Introduction, A. Big Data (mentioning) · confidence: 99%