2018
DOI: 10.1007/978-3-030-01731-6_5

UAV Relay in VANETs Against Smart Jamming with Reinforcement Learning

Cited by 69 publications (126 citation statements)
References 21 publications
“…While the main purpose of utilizing a city map or radio map is to learn the channel indirectly or directly, another useful technique is to learn and adapt to the environment by directly interacting with it, for which reinforcement learning emerges as a powerful tool [175]. Reinforcement learning has been used in UAV networks for various purposes, e.g., navigation [176], anti-jamming [177], and communication rate maximization [178]. Specifically, the authors in [176] applied the deep reinforcement learning (DRL) technique for autonomous UAV navigation in complex environments to guide the UAV from a given initial location to the destination, using only sensory information such as the UAV's orientation angle and the distances to obstacles and the destination.…”
Section: F. UAV-Assisted Communication via Intelligent Learning (mentioning)
Confidence: 99%
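The learn-by-interaction idea described in the excerpt above can be made concrete with a toy example. The following is a minimal sketch, not taken from any of the cited works: a tabular Q-learning agent choosing among a few hypothetical transmit/relay actions against a randomly acting jammer, where the state space, action space, and reward are all assumed for illustration.

```python
# Minimal sketch (illustrative only, not the cited papers' implementation):
# tabular Q-learning for a toy UAV decision task against a random jammer.
# State space, action space, and reward are hypothetical.
import numpy as np

N_STATES = 8      # assumed: quantized channel-quality / jammer-power levels
N_ACTIONS = 4     # assumed: candidate relay power levels or waypoints
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def step(state, action, rng):
    """Toy environment: reward grows with the chosen action unless the
    (random) jammer picks the matching level; the next state is drawn at
    random and ignores the current state, purely for illustration."""
    jammer = rng.integers(N_ACTIONS)
    reward = float(action) - 2.0 * (action == jammer)
    next_state = int(rng.integers(N_STATES))
    return next_state, reward

rng = np.random.default_rng(0)
state = 0
for t in range(5000):
    # epsilon-greedy exploration
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action, rng)
    # standard Q-learning temporal-difference update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))
```

Deep reinforcement learning replaces the table Q with a neural network, so the same update rule can scale to large or continuous state spaces such as raw sensory inputs for navigation.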
“…In this article, we have elaborated the bi-directional mission offloading framework in SAGIN, which makes full use of the complementary advantages of the space-air networks and ground networks. The overall architecture of agile mission offloading, and the enabling network reconfiguration framework based on NFV and SFC, have been introduced and validated with a case study, which demonstrates the substantial performance gain in reliability and cost reduction.…”
Section: Discussion (mentioning)
Confidence: 99%
“…To quickly achieve the optimal relay policy for the UAV, DQL based on a CNN is then adopted. The simulation results in [115] show that the proposed DQL scheme takes only 200 time slots to converge to the optimal policy, which is 83.3% fewer time slots than the Q-learning based relay scheme requires [116]. Moreover, the proposed DQL scheme reduces the BER of the user by 46.6% compared with the hill-climbing based UAV relay scheme [117].…”
Section: A. Network Security (mentioning)
Confidence: 99%
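As a rough illustration of the DQL-style relay learning summarized above, the sketch below trains a small Q-network with experience replay and an epsilon-greedy policy. It uses a fully connected network rather than the CNN of the cited scheme, and the state vector, action set, and reward signal (a stand-in for SINR/BER feedback) are hypothetical assumptions, not taken from the paper.

```python
# Minimal DQN-style sketch in PyTorch, assuming a small fully connected
# network instead of the cited CNN; state, actions, and reward are assumed.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6     # assumed: e.g. recent SINR readings and jammer-power estimates
N_ACTIONS = 4     # assumed: candidate relay power levels
GAMMA = 0.9

qnet = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=2000)

def toy_env(state, action):
    """Stand-in environment: higher relay power helps unless it coincides
    with the (random) jamming level; purely illustrative."""
    jam = random.randrange(N_ACTIONS)
    reward = action - 2.0 * (action == jam)
    next_state = torch.rand(STATE_DIM)
    return next_state, reward

state = torch.rand(STATE_DIM)
for t in range(2000):
    # epsilon-greedy action selection from the current Q-network
    if random.random() < 0.1:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(qnet(state).argmax())
    next_state, reward = toy_env(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        # TD target uses the max Q-value of the next state (standard DQN)
        with torch.no_grad():
            target = r + GAMMA * qnet(s2).max(dim=1).values
        pred = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The generalization that the network provides across similar states, together with experience replay, is what typically lets such a scheme converge in fewer time slots than tabular Q-learning on the same task, in line with the gap reported in the excerpt.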