2019
DOI: 10.1109/tvt.2019.2935450

Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks

Abstract: The rapid advancement of Artificial Intelligence (AI) has introduced Deep Neural Network (DNN)-based tasks to the ecosystem of vehicular networks. These tasks are often computation-intensive, requiring substantial computation resources, which are beyond the capability of a single vehicle. To address this challenge, Vehicular Edge Computing (VEC) has emerged as a solution, offering computing services for DNN-based tasks through resource pooling via Vehicle-to-Vehicle/Infrastructure (V2V/V2I) communications. In …

Cited by 441 publications (151 citation statements)
References 54 publications

“…Experimental results show that the proposed algorithm performs better than the MUMTO algorithm in [177] in terms of overall cost, energy consumption, and delay. Liu et al. [144] formulated offloading and resource allocation in VECNs as a semi-Markov process, considering stochastic vehicle traffic, dynamic computation requests, and time-varying communication conditions. Two RL methods, i.e., a Q-learning-based method and a DRL method, were designed to obtain the optimal policies for computation offloading and resource allocation.…”
Section: LBs for Joint Issues (mentioning, confidence: 99%)
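The Q-learning-based method this statement mentions can be illustrated with a short sketch: an epsilon-greedy agent learns a binary offload-or-compute-locally policy from a reward that penalizes delay and energy. The state layout, reward, and hyperparameters below are illustrative assumptions, not the design from [144].

```python
# A minimal tabular Q-learning sketch for a binary offloading decision.
# Everything here (state layout, reward, hyperparameters) is an
# illustrative assumption, not the method from the cited paper.
import random
from collections import defaultdict

ACTIONS = (0, 1)                 # 0 = execute locally, 1 = offload via V2I
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)           # Q[(state, action)] -> value estimate

def choose_action(state):
    """Epsilon-greedy action selection over the two offloading actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning update toward the bootstrapped target."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The DRL variant the statement also mentions would replace the table Q with a neural network, as sketched under the later statements in this report.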
“…With the growth of cloud computing applications, mobile edge computing (MEC) has shown broad application prospects [28]. In the area of MEC task offloading, Liu et al. [29] considered the dynamic changes of user equipment (UE) locations and service requests and proposed a DRL-based task scheduling scheme. Large numbers of vehicles are used as mobile edge servers to provide computing services to nearby equipment, effectively solving the task scheduling problem in a changing environment.…”
Section: A. Research on Job Scheduling (mentioning, confidence: 99%)
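To make the dynamic state such a scheduler reacts to concrete, here is a minimal sketch of how UE location, the service request, and per-vehicle-server load could be flattened into one observation vector for a DRL agent. The field layout and normalization constants are illustrative assumptions, not the design from [29].

```python
# A minimal sketch of a DRL scheduler's observation: UE position,
# requested task size, and the load of nearby vehicle edge servers.
# All fields and constants are illustrative assumptions.
import numpy as np

def build_state(ue_xy, task_size_mb, server_loads):
    """Flatten the dynamic environment into a single float32 vector."""
    return np.concatenate([
        np.asarray(ue_xy) / 1000.0,   # UE position, normalized to km
        [task_size_mb / 100.0],       # requested task size, normalized
        np.asarray(server_loads),     # per-vehicle-server CPU load in [0, 1]
    ]).astype(np.float32)

# Example: a UE at (250 m, 400 m) requesting a 20 MB task, with three
# candidate vehicle servers at 30%, 70%, and 10% load.
state = build_state((250.0, 400.0), 20.0, [0.3, 0.7, 0.1])
```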
“…Nevertheless, by exploiting deep neural networks (DNNs) for function approximation, deep reinforcement learning (DRL) has been shown to approximate the Q-values of RL efficiently [22] while scaling better. There have been several attempts to adopt DRL in the design of online resource allocation and scheduling for computation offloading in MEC [23][24][25][26][27]. Specifically, in [23], the system sum cost of a multi-user network, in terms of execution delay and energy consumption, is minimized through computational resource allocation.…”
Section: Introduction (mentioning, confidence: 99%)
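The function-approximation idea in this statement, replacing the Q-table with a DNN that maps a continuous system state to per-action Q-values, can be sketched as follows. The network size, state layout, and single-transition training step are illustrative assumptions, not any cited paper's architecture.

```python
# A minimal DQN-style sketch: a DNN approximates Q(s, a) for all
# actions at once, trained by one-step temporal-difference updates.
# Shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4        # e.g. [task size, queue length, channel gain, CPU load]
N_ACTIONS = 2        # local execution vs. offloading

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),        # one Q-value per action
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_step(s, a, r, s_next, gamma=0.99):
    """One temporal-difference update on a single (s, a, r, s') transition."""
    q_sa = q_net(s)[a]                       # predicted Q(s, a)
    with torch.no_grad():                    # bootstrapped target
        target = r + gamma * q_net(s_next).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice such agents also use replay buffers and minibatch updates; this sketch keeps only the core Q-value approximation step the statement describes.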
“…Besides, a double deep Q-network (DQN)-based strategic computation offloading algorithm was proposed in [26], where a mobile device learned the optimal task offloading and energy allocation to maximize its long-term utility based on the task queue state, the energy queue state, and the channel qualities. Moreover, a DQN-based vehicle-assisted offloading scheme was studied in [27] to maximize the long-term utility of the vehicle edge computing network, taking the delay of the computation task into account. Overall, existing works on DRL-based dynamic computation offloading only consider centralized algorithms, for either single-user cases or multi-user scenarios.…”
Section: Introduction (mentioning, confidence: 99%)
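The double DQN referenced for [26] decouples action selection from action evaluation to reduce the overestimation bias of vanilla DQN. Below is a minimal sketch of just the target computation; the networks, shapes, and sync schedule are illustrative assumptions, not the algorithm from the cited work.

```python
# A minimal double-DQN target sketch: the online network selects the
# next action, a separate target network evaluates it. Networks and
# shapes are illustrative assumptions.
import copy
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2
online_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_ACTIONS))
target_net = copy.deepcopy(online_net)   # re-synced from online_net periodically

def double_dqn_target(r, s_next, gamma=0.99):
    """Double-DQN bootstrap target: online net selects, target net evaluates."""
    with torch.no_grad():
        a_star = online_net(s_next).argmax()           # action chosen by the online net
        return r + gamma * target_net(s_next)[a_star]  # value taken from the target net
```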