Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2017
DOI: 10.1145/3037697.3037717

Voltage Regulator Efficiency Aware Power Management

Abstract: Conventional off-chip voltage regulators are typically bulky and slow, and are inefficient at exploiting system and workload variability using Dynamic Voltage and Frequency Scaling (DVFS). On-die integration of voltage regulators has the potential to increase the energy efficiency of computer systems by enabling power control at a fine granularity in both space and time. The energy conversion efficiency of on-chip regulators, however, is typically much lower than that of off-chip regulators, which results in significa…
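
As background for the abstract's argument: a regulator converts supply power to core power with conversion efficiency η < 1, so the energy actually drawn from the supply is the core's energy divided by η. The Python sketch below illustrates the resulting trade-off; the efficiency curve and all numbers in it are illustrative assumptions, not values from the paper.

    def regulator_efficiency(load_fraction, on_chip):
        """Hypothetical conversion-efficiency curve (an assumption, not data).
        On-chip regulators are modeled as less efficient than off-chip ones,
        and both degrade at light load."""
        peak = 0.80 if on_chip else 0.92               # assumed peak efficiencies
        return peak * min(1.0, 0.5 + load_fraction)    # drops at light load

    def supply_energy(core_power_w, duration_s, load_fraction, on_chip):
        """Energy drawn from the supply = core energy / conversion efficiency."""
        eta = regulator_efficiency(load_fraction, on_chip)
        return core_power_w * duration_s / eta

    # A fast on-chip regulator can still win overall if the fine-grained DVFS
    # savings it enables (here 10 W -> 7 W) outweigh its extra conversion loss.
    baseline = supply_energy(10.0, 1.0, 0.3, on_chip=False)
    with_ivr = supply_energy(7.0, 1.0, 0.3, on_chip=True)
    print(f"net saving vs. off-chip baseline: {baseline - with_ivr:.2f} J")

Under these assumed numbers the on-chip path still comes out ahead, because the DVFS saving outweighs the extra conversion loss; balancing those two terms is exactly the problem a regulator-efficiency-aware power manager has to solve.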

Cited by 21 publications (9 citation statements) · References 30 publications

“…It uses RL (i.e., the multi-armed bandit approach) to identify the most energy-efficient system configuration and, given this configuration, further determines the application configuration that satisfies the energy goal with maximized accuracy. Bai et al. [10] consider the on-chip regulator efficiency loss during DVFS, aiming to minimize energy under a parameterized performance constraint. The online control policy is implemented as table-based Q-learning, making it portable across platforms without accurate modeling of a specific system.…”
Section: Resource Allocation or Management
confidence: 99%
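
The "table-based Q-learning" mentioned here is a tabular method with no function approximation. A minimal sketch of what such a DVFS controller could look like is below; the state encoding, V/F levels, reward shape, and constants are illustrative assumptions, not the actual design of Bai et al. [10].

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05    # learning rate, discount, exploration
    VF_LEVELS = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0), (1.1, 2.5)]   # (volts, GHz)

    Q = defaultdict(lambda: [0.0] * len(VF_LEVELS))   # Q[state][action]

    def choose_action(state):
        """Epsilon-greedy selection over V/F levels."""
        if random.random() < EPSILON:
            return random.randrange(len(VF_LEVELS))
        qs = Q[state]
        return qs.index(max(qs))

    def reward(energy_j, slowdown, lam=1.0):
        """Penalize energy (including regulator conversion loss) plus a
        performance penalty; lam parameterizes the performance constraint."""
        return -(energy_j + lam * max(0.0, slowdown))

    def update(state, action, r, next_state):
        """One tabular Q-learning step: move Q toward the TD target."""
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

Because the policy is just a lookup table indexed by observed state, it needs no analytical model of the platform, which is what makes it portable across systems.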
“…This approach therefore targeted longer-executing workloads, but can provide more than 24% energy savings over the next-best approach. Bai et al. [76] implemented an RL-based DVFS control policy adapted to a novel voltage regulator hierarchy that uses off-chip switching regulators and on-chip linear regulators. Individual RL agents adapt to a dynamically allocated power budget determined by a heuristic bidding approach.…”
Section: System-level Optimization
confidence: 99%
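
The "heuristic bidding" budget allocation could look roughly like the sketch below: each core's agent submits a bid, and the global power budget is split in proportion to the bids. The bid semantics and the proportional split are assumptions for illustration, not the mechanism from [76].

    # Sketch: divide a global power budget among per-core RL agents in
    # proportion to their bids. Bids and the split rule are hypothetical.

    def allocate_budget(global_budget_w, bids):
        total = sum(bids.values())
        if total == 0:                       # nobody bids: split evenly
            share = global_budget_w / len(bids)
            return {core: share for core in bids}
        return {core: global_budget_w * b / total for core, b in bids.items()}

    # Example: a bid might reflect the predicted benefit of a higher V/F level.
    bids = {"core0": 3.0, "core1": 1.0, "core2": 2.0}
    budgets = allocate_budget(12.0, bids)    # {'core0': 6.0, 'core1': 2.0, 'core2': 4.0}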
“…Chen and Marculescu [78] (later Chen et al. [79]) explored an alternative two-level strategy for RL-based DVFS. Similar to Bai et al. [76], they used RL agents at a fine-grained, per-core level to select a V/F level based on an allocated share of the global power budget. They achieved further improvement by allocating the power budget using a performance-aware, albeit still heuristic-based, variant that considers relative application performance requirements.…”
Section: System-level Optimization
confidence: 99%
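
The performance-aware variant described here might, for example, weight each core's bid by its application's relative performance requirement before splitting the budget. A hedged sketch, continuing the hypothetical allocate_budget example above:

    def performance_aware_bids(raw_bids, perf_requirements):
        """Scale each bid by its application's relative performance target."""
        return {core: raw_bids[core] * perf_requirements[core] for core in raw_bids}

    weighted = performance_aware_bids(bids, {"core0": 1.0, "core1": 2.0, "core2": 0.5})
    budgets = allocate_budget(12.0, weighted)   # latency-critical apps get more headroom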
“…The variable γ (where 0 ≤ γ ≤ 1) in this equation is the discount rate, which determines the impact of future rewards on the total return: as γ approaches 1, the agent becomes less near-sighted by giving more weight to future rewards. Additionally, an ε-greedy policy is applied to explore unvisited regions of the state-action space [15,31,32]. A detailed discussion of how the parameters γ and ε impact system-level performance is presented in Section 7.3.…”
Section: Reinforcement Learning
confidence: 99%
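
The equation this statement refers to is not reproduced in the excerpt; presumably it is the standard discounted return, which in conventional RL notation reads

    G_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}, \qquad 0 \le \gamma \le 1,

and the ε-greedy policy mentioned alongside it selects actions as

    a_t = \begin{cases}
      \arg\max_a Q(s_t, a) & \text{with probability } 1 - \varepsilon, \\
      \text{a uniformly random action} & \text{with probability } \varepsilon.
    \end{cases}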
“…Table 1 describes the simulation parameters used. The selection of RL parameters (such as α, γ, and ε) can impact the performance of the trained control policy [31,32,42]. We tune the discount rate γ and the exploration probability ε on the blackscholes benchmark from PARSEC, resulting in γ = 0.9 and ε = 0.05.…”
Section: Simulation Setup
confidence: 99%
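
The tuning step described here amounts to a small grid search over γ and ε. A sketch is below; run_blackscholes is a hypothetical stand-in for the authors' simulator, assumed to return a lower-is-better metric such as the energy-delay product of one training run.

    def tune(run_blackscholes):
        best = None
        for gamma in (0.5, 0.7, 0.9, 0.99):
            for eps in (0.01, 0.05, 0.1):
                score = run_blackscholes(gamma=gamma, epsilon=eps)
                if best is None or score < best[0]:
                    best = (score, gamma, eps)
        return best   # the statement reports gamma = 0.9, epsilon = 0.05 winning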