2020
DOI: 10.1155/2020/7630275
Task Offloading with Power Control for Mobile Edge Computing Using Reinforcement Learning-Based Markov Decision Process

Abstract: This paper proposes an efficient computation task offloading mechanism for mobile edge computing (MEC) systems. The studied MEC system consists of multiple user equipments (UEs) and multiple radio interfaces. To maximize the number of UEs benefiting from the MEC, the task offloading and power control strategy for each UE is jointly optimized. However, finding the optimal solution is NP-hard. We then reformulate the problem as a Markov decision process (MDP) and develop a reinforc…
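The MDP-plus-reinforcement-learning approach the abstract describes can be illustrated with a small tabular Q-learning sketch. Everything below (queue-length state, binary offload decision, three discrete power levels, reward shape, arrival process) is an assumed toy discretization for illustration only, not the system model from the paper:

```python
# Minimal sketch of tabular Q-learning for a toy task-offloading MDP.
# All quantities here (state/action encoding, reward shape, arrival rates)
# are illustrative assumptions, NOT the paper's actual system model.
import numpy as np

rng = np.random.default_rng(0)

N_QUEUE = 5          # assumed: task-queue backlog levels (state)
N_OFFLOAD = 2        # action part 1: 0 = execute locally, 1 = offload to MEC
N_POWER = 3          # action part 2: discrete transmit-power levels
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table over (state, offload decision, power level)
Q = np.zeros((N_QUEUE, N_OFFLOAD, N_POWER))

def step(state, offload, power):
    """Toy environment: offloading drains the queue faster at higher power,
    but higher power incurs a larger energy penalty in the reward."""
    served = 1 + offload * power          # tasks served this slot (assumed)
    arrivals = rng.integers(0, 2)         # random task arrivals
    next_state = int(np.clip(state + arrivals - served, 0, N_QUEUE - 1))
    reward = -next_state - 0.5 * offload * power  # latency + energy penalty
    return next_state, reward

state = 0
for t in range(20000):
    if rng.random() < EPS:                # epsilon-greedy exploration
        offload, power = rng.integers(N_OFFLOAD), rng.integers(N_POWER)
    else:                                 # greedy joint (offload, power) action
        offload, power = np.unravel_index(Q[state].argmax(), Q[state].shape)
    next_state, reward = step(state, offload, power)
    # Standard Q-learning update
    td_target = reward + GAMMA * Q[next_state].max()
    Q[state, offload, power] += ALPHA * (td_target - Q[state, offload, power])
    state = next_state

print("Greedy policy (rows: queue state; entries: (offload, power)):")
for s in range(N_QUEUE):
    print(s, np.unravel_index(Q[s].argmax(), Q[s].shape))
```

The key point the sketch captures is that offloading and power control form a single joint action, so the learned policy trades queue backlog (a latency proxy) against transmit energy in one decision.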

Cited by 12 publications (8 citation statements). References 11 publications.
“…This approach is also adopted in [10], [22], [9], [25], [19], [31], and [20], where the authors treat task offloading jointly with the problem of resource allocation in resource-constrained networks. Task offloading can be approached as a Markov decision process (MDP) problem, as suggested by the authors in [8], [34], [29], [32], [26], [7], [33], and [15]. To determine the optimal policy for the MDP problem, several works [32], [26], [15] have implemented strategies based on Q-learning, a classical reinforcement learning algorithm, although this algorithm suffers from the same resolution-time deficits as heuristic methods.…”
Section: Related Work
confidence: 99%
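For reference, the tabular Q-learning update these works build on is the standard one (a textbook result, not specific to any cited paper):

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

where $\alpha$ is the learning rate, $\gamma$ the discount factor, and $(s, a, r, s')$ a single observed transition; in the offloading setting, an action $a$ would pair an offload decision with a transmit-power level.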
“…For example, Gaudette et al. proposed using arbitrary polynomial chaos expansions to account for the effect of various uncertainties on mobile user experience [33]. Other computation offloading techniques also consider performance variability in the mobile environment for energy-efficiency optimization [2, 3, 21, 47, 59, 61, 62, 92, 99, 131, 134]. While these techniques address similar runtime variance in the edge-cloud execution environment, prior works are suboptimal for FL because of the highly distributed nature of FL use cases: not only can system and data heterogeneity easily degrade the quality of FL, but runtime variance can also introduce uncertainty in FL's training-time performance and execution efficiency.…”
Section: Related Work
confidence: 99%