2020
DOI: 10.1109/tii.2019.2953932
Autonomous Power Management With Double-Q Reinforcement Learning Method

Cited by 39 publications (22 citation statements)
References 22 publications
“…In the training process, the sub-tasks are trained separately with double Q-learning. Double Q-learning is used to tackle the overestimation problem that arises from the maximization operation in traditional reinforcement learning, and it is often reported that this algorithm outperforms traditional RL methods [33]. For each sub-task, double Q-learning uses an alternative double estimator to approximate the maximum value function, and in the limit double Q-learning converges to the optimal policy.…”
Section: The Training Methods For Sub-tasks Using the Double Q-learning
confidence: 99%
“…With continuous scaling and growing power density, power management has become a pressing concern for microprocessors seeking good energy efficiency [1][2][3]. This is especially critical for battery-powered mobile devices, which feature frequent power-state transitions and management for varying workloads [4,5].…”
Section: Introduction
confidence: 99%
“…Recently, deep learning has emerged as another effective way to efficiently model very complex physical details. Thus, several researchers have proposed deploying different deep neural networks to enable DVFS [1,3]. However, how to efficiently incorporate such models into an actual system, while retaining the flexibility to switch between different targets, remains unclear.…”
Section: Introduction
confidence: 99%
“…Energy efficiency is the foremost challenge in sustaining the performance of concurrently running applications in a mobile edge computing environment. The first article, entitled "Autonomous power management with double-Q reinforcement learning method" by Huang et al. [1], proposes a double-Q power management approach that uses learning to overcome the static operating-frequency scaling policies of traditional dynamic voltage and frequency scaling (DVFS) for extended energy sustainability. The experimental results show that randomly selecting one of the two Q-tables to update while using the other for value estimation reduces overestimation, thereby saving energy in comparison with the on-demand, conservative, and Q-learning-based methods.…”
confidence: 99%
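The two-table update that the citation statements describe, where one randomly chosen Q-table is updated while the other evaluates the greedy action, can be sketched in tabular form. This is a minimal illustrative sketch of standard double Q-learning, not the paper's actual implementation; the state/action framing (workload levels as states, frequency settings as actions) and all parameter values are assumptions.

```python
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.9   # discount factor (assumed)

q_a = defaultdict(float)  # first Q-table
q_b = defaultdict(float)  # second Q-table

def double_q_update(state, action, reward, next_state, actions):
    """Randomly pick one table to update; select the greedy next action
    with that table, but evaluate it with the other table. Decoupling
    action selection from evaluation is what reduces the overestimation
    bias of single-estimator Q-learning."""
    if random.random() < 0.5:
        update, evaluate = q_a, q_b
    else:
        update, evaluate = q_b, q_a
    # Greedy next action according to the table being updated
    best = max(actions, key=lambda a: update[(next_state, a)])
    # Target uses the *other* table's estimate of that action's value
    target = reward + GAMMA * evaluate[(next_state, best)]
    update[(state, action)] += ALPHA * (target - update[(state, action)])

def act(state, actions, epsilon=0.1):
    """Epsilon-greedy policy over the sum of both tables."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_a[(state, a)] + q_b[(state, a)])
```

In a power-management setting, `reward` would typically trade off energy savings against a performance penalty; the sketch leaves that reward design abstract.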