2011 Sixth Open Cirrus Summit
DOI: 10.1109/ocs.2011.6

Distributed, Robust Auto-Scaling Policies for Power Management in Compute Intensive Server Farms

Abstract: Server farms today often over-provision resources to handle peak demand, resulting in an excessive waste of power. Ideally, server farm capacity should be dynamically adjusted based on the incoming demand. However, the unpredictable and time-varying nature of customer demands makes it very difficult to efficiently scale capacity in server farms. The problem is further exacerbated by the large setup time needed to increase capacity, which can adversely impact response times as well as utilize additiona…

Cited by 15 publications (7 citation statements) | References: 17 publications
“…The methods aiming to reduce data center power consumption can be classified into four approaches [1]: power proportionality, which attempts to ensure that servers consume power in proportion to their utilization [2]-[4]; energy-efficient server design, which attempts to determine the proper server architecture for a given workload [5]-[7]; dynamic server provisioning, which attempts to determine when servers should be kept on or off [8], [9]; and consolidation and virtualization, which attempts to reduce power consumption through resource sharing [10]-[12]. We refer the reader to [13]-[17] for further literature related to our work.…”
Section: A. Related Work (mentioning, confidence: 99%)
“…Power state transitioning of a server's processor involves a transition latency, or time overhead, which is highly processor dependent and can vary from one processor type to another (Table 4). The server stays in an interim power state, called the SETUP state, for the duration of each such transition and consumes what is termed SETUP power [2].…”
Section: Server State Transition Approach (mentioning, confidence: 99%)
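To make the SETUP-state overhead described in the passage above concrete, the following is a minimal sketch of a server model with OFF, SETUP, and ON states. The per-state power figures, the setup latency, and the class structure are illustrative assumptions, not values or code from the cited papers.

```python
from dataclasses import dataclass

# Illustrative per-state power draw (watts) and setup latency (seconds);
# real values are processor dependent, as the quoted passage notes.
POWER = {"OFF": 0.0, "SETUP": 200.0, "ON": 200.0, "SLEEP": 10.0}
SETUP_TIME = 60.0  # assumed time to go from OFF to ON


@dataclass
class Server:
    state: str = "OFF"
    setup_remaining: float = 0.0

    def power_on(self) -> None:
        # Waking a server is not instantaneous: it first enters SETUP,
        # drawing full power without serving jobs until the latency elapses.
        if self.state == "OFF":
            self.state = "SETUP"
            self.setup_remaining = SETUP_TIME

    def tick(self, dt: float) -> float:
        """Advance the model by dt seconds; return energy drawn (joules)."""
        energy = POWER[self.state] * dt
        if self.state == "SETUP":
            self.setup_remaining -= dt
            if self.setup_remaining <= 0.0:
                self.state = "ON"  # setup complete; server can now serve jobs
        return energy
```

In this toy model, a woken server draws full power during SETUP yet serves no requests, which is precisely the overhead that makes aggressive on/off cycling costly.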
“…The interim server processor state during the transition is referred to as the SETUP power state. The power drawn during the wake-up and shutdown transition period can be considered equal to the server's power consumption at the highest utilization load [2]. To improve data centers' power efficiency, servers should be well utilized and, when not in use, switched OFF or transitioned to low-power SLEEP states; hence the importance of server consolidation and processor power state transition management.…”
Section: Introduction (mentioning, confidence: 99%)
“…In DRAS, the authors explored policies for load balancing requests across servers and deciding when to power idle servers off, assuming computationally intensive workloads [10], demonstrating that a reduction in average power can be achieved with a slight penalty in average latency. Similar observations were made for the Salsa web server [8], though they also demonstrated that request batching could further save energy.…”
Section: Power Saving (mentioning, confidence: 99%)
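The quoted passage summarizes DRAS-style policies that trade a small latency penalty for power savings by powering idle servers off. Below is a minimal sketch of one such delayed-off style control step; the parameter name TIMEOUT_IDLE, the server representation, and the thresholds are assumptions for illustration only, not the exact policies evaluated in [10] or [8].

```python
TIMEOUT_IDLE = 120.0  # assumed seconds a server may sit idle before being powered off


def autoscale_step(servers, queue_len, now):
    """One control step of a delayed-off style policy (illustrative sketch).

    servers: list of dicts with keys
      'state'      : 'ON', 'OFF', or 'SETUP'
      'idle_since' : timestamp when the server last became idle, or None if busy
    """
    awake = sum(s["state"] in ("ON", "SETUP") for s in servers)

    # Scale up: if requests are queued and spare machines exist, start waking one.
    # The woken server still pays the setup latency before it can serve anything.
    if queue_len > 0 and awake < len(servers):
        for s in servers:
            if s["state"] == "OFF":
                s["state"] = "SETUP"
                break

    # Scale down: only power off a server after it has stayed idle for TIMEOUT_IDLE,
    # so short lulls in demand do not trigger costly off/on cycles.
    for s in servers:
        if s["state"] == "ON" and s["idle_since"] is not None:
            if now - s["idle_since"] >= TIMEOUT_IDLE:
                s["state"] = "OFF"
                s["idle_since"] = None
```

The delayed power-off is the design choice that produces the reported trade-off: average power drops because idle servers are eventually turned off, while average latency rises slightly because demand spikes must sometimes wait out the setup time.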