2015
DOI: 10.1142/s0129626415410066

“Big, Medium, Little”: Reaching Energy Proportionality with Heterogeneous Computing Scheduler

Abstract: Energy savings are among the most important topics concerning Cloud and HPC infrastructures nowadays. Servers consume a large amount of energy even when their computing power is not fully utilized. These static costs are a real concern, mostly because many datacenter managers over-provision their infrastructures compared to the actual needs, which results in a large share of wasted power consumption. In this paper, we propose the BML (“Big, Medium, Little”) infrastructure, compose…

Cited by 13 publications (18 citation statements). References 14 publications.
“…The strong aspect is that it does not rely on a specific processor design. This idea has been introduced previously [9], and its feasibility has been studied [3]. The present work implements scheduling policies that take into account both benefits and drawbacks of using independent machines.…”
Section: Related Work On Energy Proportionality
confidence: 99%
“…Each machine type is profiled by running the target application, and the best resource combinations for all application performance rates are computed. Our previous work introduced the concept of a heterogeneous energy proportional infrastructure, named "Big,Medium,Little" (BML) [3]. The present work extends it by providing a scheduler that handles dynamic reconfiguration decisions, which consist in dynamic resource management with switch on and off actions, whose time and energy overheads are taken into account, to achieve energy proportionality while respecting QoS requirements.…”
Section: Introduction
confidence: 99%
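The combination step described in this excerpt can be pictured with a small brute-force sketch: given per-type power and performance profiles obtained by profiling, enumerate machine-count combinations and keep the cheapest one that meets each target performance rate. This is only a minimal illustration of the idea; all names and figures below are assumptions, not values from the paper.

```python
from itertools import product

# Hypothetical per-machine-type profiles (illustrative numbers, not from the
# paper): busy power draw in watts and the application performance rate
# (e.g. requests/s) measured by profiling the target application.
PROFILES = {
    "big":    {"power_w": 200.0, "perf": 100.0},
    "medium": {"power_w":  90.0, "perf":  40.0},
    "little": {"power_w":  15.0, "perf":   5.0},
}

def best_combination(target_perf, max_per_type=4):
    """Return the machine-count combination that meets `target_perf`
    with the lowest total power, found by brute-force enumeration."""
    best = None
    types = list(PROFILES)
    for counts in product(range(max_per_type + 1), repeat=len(types)):
        perf = sum(n * PROFILES[t]["perf"] for n, t in zip(counts, types))
        power = sum(n * PROFILES[t]["power_w"] for n, t in zip(counts, types))
        if perf >= target_perf and (best is None or power < best[0]):
            best = (power, dict(zip(types, counts)))
    return best

# Precompute the best combination for a range of performance rates,
# as the quoted description suggests is done offline after profiling.
if __name__ == "__main__":
    for rate in (10, 50, 120, 250):
        print(rate, best_combination(rate))
```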
“…The strong aspect is that it does not rely on a specific processor design. This idea has been introduced previously [10], and its feasibility has been studied [3]. The present work implements a BML infrastructure with scheduling policies that take into account both benefits and drawbacks of using independent machines.…”
Section: Related Work On Energy Proportionality
confidence: 99%
“…Each server type is profiled by running the target application, and the best server combinations for all application performance rates are then computed. Our previous work introduced the concept of a heterogeneous energy proportional infrastructure, named "Big,Medium,Little" (BML) [3]. The present work extends it by providing a scheduler that handles reconfiguration decisions, such as dynamic application migrations and management of computing resources with switch on and off actions.…”
Section: Introduction
confidence: 99%
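The switch on/off overheads mentioned in this excerpt suggest a simple decision rule: power a server down only when the expected idle period is long enough that the energy saved exceeds the cost of the off/on cycle. The sketch below is a minimal illustration under assumed power and timing figures; none of the numbers or names come from the paper.

```python
from dataclasses import dataclass

@dataclass
class ServerModel:
    """Illustrative server model; all figures are assumptions, not from the paper."""
    idle_power_w: float      # power drawn while idle but powered on
    off_power_w: float       # residual power when switched off
    switch_time_s: float     # time needed to power off and back on
    switch_energy_j: float   # extra energy consumed by the off/on cycle

def should_switch_off(server: ServerModel, expected_idle_s: float) -> bool:
    """Switch a server off only if the energy saved during the expected idle
    period outweighs the overhead of the off/on cycle, and the cycle fits
    within the idle window (so QoS is not hurt when load returns)."""
    if expected_idle_s <= server.switch_time_s:
        return False  # idle window too short to complete the off/on cycle
    saved_j = (server.idle_power_w - server.off_power_w) * expected_idle_s
    return saved_j > server.switch_energy_j

# Example: a node expected to stay idle for 10 minutes vs. 30 seconds.
node = ServerModel(idle_power_w=80.0, off_power_w=5.0,
                   switch_time_s=90.0, switch_energy_j=12000.0)
print(should_switch_off(node, 600.0))  # True: ~45 kJ saved > 12 kJ overhead
print(should_switch_off(node, 30.0))   # False: idle window too short
```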
“…This latter effect is commonly referred to as the breakdown of Dennard scaling, which forces designs to operate only a fraction of the whole system, leaving the remaining design in a dark silicon state [8,10]. To address this, heterogeneous multiprocessors (HMPs) have emerged over other designs, especially in mobile devices [17,33], where keeping within a strict energy envelope while still being able to deliver performance on demand is crucial. To be able to deliver high performance through Memory Level Parallelism (MLP) and Instruction Level Parallelism (ILP), an Out-of-Order (OoO) core is commonly used that has large caches, does aggressive speculation (branch predictors, prefetchers) and masks memory latency at the cost of significantly increased design complexity, area and power requirements. On the other hand, In-Order (InO) cores aim at conserving energy through a simpler and smaller design, at the expense of performance and lower operating frequency.…”
confidence: 99%