2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC)
DOI: 10.1109/gncc42960.2018.9018854
Neural Q Learning Algorithm based UAV Obstacle Avoidance

Cited by 6 publications (7 citation statements) · References 8 publications
“…Navigation may be challenging for a UAV during missions, since it may not have precise knowledge of the environment and obstacle information. Benchun Zhou et al. therefore studied the neural Q-learning method to help the UAV avoid obstacles [2]. They assumed that the environment the UAV operated in was unknown and that obstacles could appear within its execution range; to address the quality issue of the Q-table, maintain the route-planning process, and account for the UAV's physical limitations, they integrated a BP neural network into Q-learning.…”
Section: UAV Obstacle Avoidance Based On Q-learning
confidence: 99%
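The quoted approach replaces the discrete Q-table with a function approximator. As a minimal sketch of that idea, the small fully connected (BP-style) network below maps a continuous UAV state to one Q-value per discrete action; the class name, layer sizes, and state/action dimensions are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

class QNetwork:
    """Tiny MLP standing in for the Q-table: state -> Q-value per action."""
    def __init__(self, state_dim, n_actions, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)

    def q_values(self, state):
        h = np.tanh(state @ self.w1 + self.b1)  # hidden layer
        return h @ self.w2 + self.b2            # one Q-value per action

# Usage: a 4-dimensional state (e.g. relative goal/obstacle coordinates)
# scored against 5 hypothetical heading commands.
net = QNetwork(state_dim=4, n_actions=5)
print(net.q_values(np.array([0.2, -0.1, 0.5, 0.0])))
```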
“…$Q(s,a) = R(s,a) + \gamma \sum_{s' \in S} p(s,a,s') \sum_{a' \in A} Q(s',a')$ (2)…”
Section: Mobile Robot Path Planning Based On Q-learning
confidence: 99%
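A direct numpy transcription of equation (2) as quoted; note that it sums Q(s',a') over all actions rather than taking the more common max, so this literal form only converges when gamma * |A| < 1. The toy reward, transition model, and discount below are illustrative.

```python
import numpy as np

def q_backup(R, P, Q, gamma):
    """Equation (2): Q(s,a) = R(s,a) + gamma * sum_s' p(s,a,s') * sum_a' Q(s',a')."""
    # R: (S, A) rewards; P: (S, A, S) transition probabilities; Q: (S, A).
    return R + gamma * P @ Q.sum(axis=1)

S, A = 3, 2                              # toy MDP: 3 states, 2 actions
R = np.ones((S, A))
P = np.full((S, A, S), 1.0 / S)          # uniform transitions
Q = np.zeros((S, A))
for _ in range(50):                      # iterate the backup to a fixed point
    Q = q_backup(R, P, Q, gamma=0.3)     # gamma * A = 0.6 < 1, so it converges
print(Q)                                 # every entry approaches 2.5
```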
“…In recent years, reinforcement learning (RL) based approaches have been widely investigated in the UAV navigation domain [46][47][48][49][50][51][52][53][54]. The classic Q-learning (CQL) algorithm proposed in [55] rests on the principle that when the UAV observes the environment at time step k and takes an action based on that observation, it receives an immediate reward from the environment.…”
Section: Related Work
confidence: 99%
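In code, that observe-act-reward principle is the standard one-step temporal-difference update; a minimal tabular sketch (the state/action counts and the single transition are illustrative):

```python
import numpy as np

alpha, gamma = 0.1, 0.9
Q = np.zeros((10, 4))                    # 10 discrete states x 4 actions

def q_update(s, a, r, s_next):
    """Classic Q-learning: nudge Q(s,a) toward the bootstrapped return."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

q_update(s=0, a=2, r=1.0, s_next=3)      # one observed transition
print(Q[0, 2])                           # 0.1 after the first update
```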
“…A neural Q-learning (NQL) based approach is proposed in [51]. The authors combine CQL with a Back Propagation Neural Network (BPN) to obtain the resulting NQL, which can be trained to achieve obstacle avoidance.…”
Section: Related Work
confidence: 99%
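One plausible shape for that combination: the CQL-style TD target supervises a back-propagation step through the network. The one-hidden-layer net, manual gradients, and hyperparameters below are assumptions for illustration, not the architecture of [51].

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_actions, hidden = 4, 5, 16
lr, gamma = 1e-2, 0.9
w1 = rng.normal(0.0, 0.1, (state_dim, hidden)); b1 = np.zeros(hidden)
w2 = rng.normal(0.0, 0.1, (hidden, n_actions)); b2 = np.zeros(n_actions)

def forward(s):
    h = np.tanh(s @ w1 + b1)
    return h, h @ w2 + b2                       # hidden activations, Q-values

def nql_step(s, a, r, s_next, done=False):
    """One NQL step: TD target from CQL, weight update by back-propagation."""
    global w1, b1, w2, b2
    h, q = forward(s)
    _, q_next = forward(s_next)
    target = r if done else r + gamma * q_next.max()
    err = q[a] - target                         # TD error on the taken action
    dq = np.zeros(n_actions); dq[a] = err       # gradient of 0.5*err**2 w.r.t. q
    dh = (w2 @ dq) * (1.0 - h**2)               # backprop through tanh
    w2 -= lr * np.outer(h, dq); b2 -= lr * dq
    w1 -= lr * np.outer(s, dh); b1 -= lr * dh

s, s_next = rng.normal(size=state_dim), rng.normal(size=state_dim)
nql_step(s, a=1, r=-1.0, s_next=s_next)         # e.g. penalty near an obstacle
```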
“…Ref. [15] uses Q-learning with adaptive random exploration to achieve UAV navigation and obstacle avoidance, but the method is constrained by the action space and sample dimensions. Addressing the issue of UAVs' inability to acquire environmental information, ref.…
Section: Introduction
confidence: 99%
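The excerpt does not specify the exploration scheme; one common reading of "adaptive random exploration" is an epsilon-greedy policy whose randomness decays as training progresses. A sketch under that assumption (all parameter names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_epsilon_greedy(q_values, episode, eps_start=1.0,
                            eps_min=0.05, decay=0.995):
    """Exploration probability shrinks as the agent gains experience."""
    eps = max(eps_min, eps_start * decay ** episode)
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))  # explore: random action
    return int(np.argmax(q_values))              # exploit: greedy action

q = np.array([0.1, 0.5, 0.2])
print(adaptive_epsilon_greedy(q, episode=0))     # early: mostly random
print(adaptive_epsilon_greedy(q, episode=1000))  # late: mostly argmax
```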