2022
DOI: 10.1109/tmc.2021.3059691
Deep Reinforcement Learning Based Dynamic Trajectory Control for UAV-Assisted Mobile Edge Computing

Abstract: In this paper, we consider a platform of flying mobile edge computing (F-MEC), where unmanned aerial vehicles (UAVs) serve as equipment providing computation resource, and they enable task offloading from user equipment (UE). We aim to minimize energy consumption of all UEs via optimizing user association, resource allocation and the trajectory of UAVs. To this end, we first propose a Convex optimizAtion based Trajectory control algorithm (CAT), which solves the problem in an iterative way by using block coord…


Cited by 158 publications (76 citation statements)
References 37 publications
“…In [18], a hierarchical DRL algorithm was developed to minimize the average delay of tasks by jointly optimizing the movement locations of SDs and offloading decisions. To minimize the energy consumption of all SDs, Wang et al. [19] presented a trajectory control method based on DDPG with prioritized experience replay. Dai et al. [20] considered a UAV-and-BS enabled MEC system and devised a DDPG-based task association scheduling method to minimize the system's energy consumption.…”
Section: Single-Objective Optimization
confidence: 99%
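The statement above cites a trajectory control method built on DDPG with prioritized experience replay. As a rough illustration of the replay component only (not the cited paper's implementation — the proportional-priority scheme and the `alpha`/`eps` hyperparameters are assumptions), a minimal buffer might look like:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Transitions with larger TD error are sampled more often, so the agent
    revisits surprising experiences; alpha controls how strongly priority
    skews sampling, eps keeps every transition sampleable.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha
        self.eps = eps
        self.buffer = []       # stored transitions, ring-buffer overwrite
        self.priorities = []   # one priority per stored transition
        self.pos = 0           # next write position

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices proportionally to priority (with replacement).
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]
```

In a full DDPG loop, sampled transitions would feed the critic update and the resulting TD errors would be written back as new priorities.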
“…Similar to previous studies [19], [31], we adopt the Cartesian coordinate system to model the movement of the UAV.…”
Section: UAV Movement Model
confidence: 99%
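A Cartesian movement model of the kind referenced above can be sketched as a per-step position update at fixed altitude, where the control action is a speed and heading angle (the service-area bounds and time step here are illustrative assumptions, not values from the cited papers):

```python
import math

def step_uav(x, y, speed, angle, dt=1.0, xmax=1000.0, ymax=1000.0):
    """One movement step of a UAV at fixed altitude in Cartesian coordinates.

    x, y    -- current horizontal position (m)
    speed   -- flying speed chosen by the controller (m/s)
    angle   -- heading angle in radians (0 = +x axis)
    dt      -- duration of one decision step (s)
    The position is clamped to a rectangular service area [0, xmax] x [0, ymax].
    """
    nx = min(max(x + speed * math.cos(angle) * dt, 0.0), xmax)
    ny = min(max(y + speed * math.sin(angle) * dt, 0.0), ymax)
    return nx, ny
```

In a DRL trajectory controller, `(speed, angle)` would be the continuous action output at each step and the clamped position would enter the next state.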
“…To guarantee global load balance at the UAVs in [22], a minimum global load balance deployment problem is formulated by jointly optimizing task scheduling and the deployment of UAVs under coverage constraints. A Deep Reinforcement Learning (DRL) [72], [73] based task scheduling scheme is used for efficient task execution and effective scheduling of offloaded tasks across multiple UAVs, reducing transmission delay and improving the QoS of users.…”
Section: Load Balancing and Secrecy Capacity
confidence: 99%
“…Reinforcement Learning [17] can learn and optimize for UAV-assisted MEC without training data by interacting with the MEC environment. Therefore, reinforcement learning [17] and Deep Reinforcement Learning (DRL) [18] methods are introduced for resource allocation and UAV position optimization [19] to minimize energy consumption. DRL [20] employs deep neural networks to capture the complex states of UAV-assisted MEC and reinforcement learning to make decisions.…”
Section: Introduction
confidence: 99%