2021 IEEE/CIC International Conference on Communications in China (ICCC)
DOI: 10.1109/iccc52777.2021.9580313
Adaptive Task Offloading in Vehicular Edge Computing Networks Based on Deep Reinforcement Learning

Cited by 5 publications (3 citation statements). References 16 publications.
“…To deliver better services to nearby users and achieve seamless real-time responses to mobile user service requests, Wang S proposed an offline microservice coordination algorithm based on dynamic programming [7]. Shuai suggested an adaptive network routing technique to find a balance between modeling accuracy and optimization efficiency in MEC systems with time-varying connection delays [8]. In heterogeneous vehicle networks with many random jobs, time-varying radio channels, and dynamic bandwidth, Ke proposed an adaptive depth-reinforcement-based computational offloading approach [9].…”
Section: Related Work
confidence: 99%
“…RL/DRL has achieved state-of-the-art performance in various vehicular environments, particularly in task offloading [3]. Considering the dynamic nature of the vehicular environment, Shuai et al [19] provided a delay optimization scheme. First, transmission latency is minimized using an optimal flow-based routing algorithm; then a deep Q-learning based task offloading strategy selection scheme performs adaptive task offloading, taking the MEC load states into account.…”
Section: Related Work
confidence: 99%
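The citation statement above describes a two-stage scheme: routing first minimizes transmission latency, then a deep Q-learning agent selects an offloading strategy based on MEC load. As a rough illustration of the second stage only, here is a tabular Q-learning sketch of offloading selection. This is not the paper's implementation (which uses a deep Q-network and a realistic vehicular model); the discretized load states, the toy delay model, and all names below are assumptions for illustration.

```python
import random

# Illustrative sketch: tabular Q-learning for choosing an offloading target
# given a discretized MEC load observation.
# States: MEC load level (0 = low, 1 = medium, 2 = high).
# Actions: 0 = execute locally, 1 = offload to the MEC server.
ACTIONS = [0, 1]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def delay(state, action):
    """Toy delay model: offloading is cheap only when the MEC is lightly loaded."""
    if action == 1:                  # offload: delay grows with MEC load
        return 1.0 + 2.0 * state
    return 4.0                       # local execution: fixed delay

# Q-table over (state, action) pairs, initialized to zero.
q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

random.seed(0)
for _ in range(20000):
    s = random.randrange(3)          # observed MEC load state
    a = choose(s)
    r = -delay(s, a)                 # reward = negative task delay
    s_next = random.randrange(3)     # next load state (i.i.d. toy dynamics)
    best_next = max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (r + GAMma * best_next - q[(s, a)]) if False else \
                 ALPHA * (r + GAMMA * best_next - q[(s, a)])

# Greedy policy per load state after training.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(3)}
print(policy)
```

Under this toy delay model, the learned greedy policy prefers offloading when the MEC is lightly loaded, mirroring the load-aware adaptation described in the citation statement.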
“…The action-value function $q_\pi(s, a)$ is the expected accumulated reward obtained by taking action $a$ in state $s$ and following policy $\pi$ thereafter, given in (18). Both functions indicate how good a state and a state-action pair are; their relationship is given in (19).…”
Section: A. DRL and the PPO
confidence: 99%
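Equations (18) and (19) of the cited paper are not reproduced in this excerpt. Assuming the standard reinforcement-learning definitions that the statement's wording points to, they would read as follows (with discount factor $\gamma$ and reward $r_{t+k+1}$; the exact notation in the paper may differ):

```latex
% Action-value function (cf. eq. (18)): expected discounted return
% from taking action a in state s and following policy \pi thereafter.
q_\pi(s, a) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k \, r_{t+k+1}
              \,\middle|\, s_t = s,\; a_t = a\right]

% Relationship to the state-value function (cf. eq. (19)):
% v_\pi(s) averages q_\pi over the actions chosen by \pi.
v_\pi(s) = \sum_{a} \pi(a \mid s)\, q_\pi(s, a)
```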