2015
DOI: 10.1109/tpds.2014.2358556

Energy-Aware Scheduling of MapReduce Jobs for Big Data Applications

Cited by 117 publications
(41 citation statements)
References 33 publications
“…Similar to the partition scheme in GreenHDFS [16], MIA studied mixed workloads on MapReduce by assigning different SLAs to services [6], allowing interactive jobs to operate on a small pool of dedicated machines while less time-sensitive jobs run on the rest of the cluster in a batch fashion. Mashayekhy et al. [26] proposed schedulers that assign server slots to minimize the energy consumed when executing MapReduce jobs. In all of the above algorithms, MapReduce is deployed on physical servers rather than VMs.…”
Section: Related Work
confidence: 99%
“…Their proposed algorithm schedules jobs at times when electricity prices are sufficiently low and at places where the energy cost per unit of work is low. Mashayekhy et al. [18] proposed energy-aware scheduling algorithms for detailed task placement of MapReduce jobs. Their scheduling algorithms account for the significant differences in energy efficiency among machines in a data center.…”
Section: Related Work
confidence: 99%
“…In the second phase, the algorithm finds the set of users Ũ, where each user's request is not greater than half of the available capacity of the PM p for each resource, by calling IS-FEASIBLE (lines 18-22). Then, the algorithm calculates the bid densities of users in Ũ (lines 23-24) according to a density metric defined as…”
Section: A Strategy-Proof Approximation Mechanism for PMRM
confidence: 99%
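The excerpt above describes a two-step selection: filter out users whose request for any resource exceeds half of the PM's remaining capacity, then rank the survivors by bid density. A minimal sketch of that step, assuming dictionary-based resource requests and an illustrative density metric (the excerpt elides the paper's actual metric, so bid per unit of normalized demand is used here purely for illustration), could look like:

```python
def is_feasible(request, available):
    """Feasibility check from the excerpt: every requested resource
    must not exceed half of the PM's available capacity."""
    return all(request[r] <= available[r] / 2 for r in available)

def bid_density(bid, request, capacity):
    """Illustrative density metric (assumption): bid divided by the
    sum of the user's normalized resource demands."""
    normalized = sum(request[r] / capacity[r] for r in capacity)
    return bid / normalized if normalized else float("inf")

def filter_and_rank(users, available, capacity):
    """Keep feasible users, then sort them by descending bid density.
    `users` maps user id -> (bid, request dict)."""
    feasible = {u: (bid, req) for u, (bid, req) in users.items()
                if is_feasible(req, available)}
    return sorted(feasible,
                  key=lambda u: bid_density(*feasible[u], capacity),
                  reverse=True)
```

For example, a user asking for 6 CPUs when only 8 are available is dropped (6 > 8/2), while one asking for 4 survives and is ranked by its density.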
“…Because of its prominence, an increasing number of studies have focused on improving the performance of Hadoop, most of which are related to either task scheduling or job scheduling [18]-[23]. These studies can generally be classified into the following categories:…”
Section: Related Work
confidence: 99%