2023
DOI: 10.3390/jmse11122245

Research on Method of Collision Avoidance Planning for UUV Based on Deep Reinforcement Learning

Wei Gao,
Mengxue Han,
Zhao Wang
et al.

Abstract: A UUV can perform tasks such as underwater surveillance, reconnaissance, and tracking by being equipped with sensors and different task modules. Because the underwater environment is complex, the UUV must have good collision avoidance planning algorithms to avoid various underwater obstacles while performing tasks. Existing path planning algorithms take a long time to plan and adapt poorly to the environment, and some collision avoidance planning algorithms do not take into account the kinem…

Cited by 4 publications (1 citation statement). References: 38 publications.
“…However, AUVs still require an obstacle avoidance algorithm to navigate around potential obstacles along the planned trajectory. Several solutions are available for AUV DOA, including rapidly exploring random trees (RRTs) [1], fuzzy logic [2], the neural network (NN), reinforcement learning (RL), deep reinforcement learning (DRL) [3], and the artificial potential field (APF) [4][5][6]. The RRT algorithm exhibits a robust capability for detecting unknown obstacles and is well-suited for addressing obstacle avoidance issues in high-dimensional environments; however, its real-time performance is relatively poor [7,8].…”
Section: Literature Review, 1.2.1 Dynamic Obstacle Avoidance (citation type: mentioning; confidence: 99%)
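
To make the RRT point in the quoted passage concrete, the following is a minimal 2-D RRT sketch for avoiding static circular obstacles. It is illustrative only: the map bounds, step size, goal bias, and obstacle set are assumptions, and it does not reproduce the cited paper's algorithm or account for UUV kinematics.

# Minimal 2-D rapidly exploring random tree (RRT) sketch for obstacle avoidance.
# Illustrative assumptions: square workspace, circular obstacles, fixed step size.
import math
import random

STEP = 0.5          # extension step length
GOAL_TOL = 0.7      # distance at which the goal counts as reached
OBSTACLES = [(4.0, 4.0, 1.5), (7.0, 2.0, 1.0)]  # (x, y, radius) circles

def collides(p):
    """Return True if point p lies inside any circular obstacle."""
    return any(math.hypot(p[0] - ox, p[1] - oy) <= r for ox, oy, r in OBSTACLES)

def rrt(start, goal, bounds=(0.0, 10.0), max_iter=5000):
    """Grow a tree from start toward random samples; return a path to goal or None."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # sample the goal 10% of the time (goal bias), otherwise a random point
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # nearest existing node in the tree
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # steer one step from the nearest node toward the sample
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < GOAL_TOL:
            # walk back through parents to recover the path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no collision-free path found within the iteration budget

if __name__ == "__main__":
    print(rrt((1.0, 1.0), (9.0, 9.0)))

As the quoted passage notes, each query grows a new tree from scratch, which is why plain RRT scales to high-dimensional spaces but has relatively poor real-time performance compared with reactive methods such as APF or learned DRL policies.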