2019
DOI: 10.1109/access.2019.2953326
Learn to Navigate: Cooperative Path Planning for Unmanned Surface Vehicles Using Deep Reinforcement Learning

Abstract: Unmanned surface vehicles (USVs) have witnessed rapid growth in the recent decade and have been applied in various practical applications in both military and civilian domains. USVs can be deployed either as a single unit or as multiple vehicles in a fleet to conduct ocean missions. Central to the control of USVs and USV formations, path planning is the key technology that ensures navigation safety by generating collision-free trajectories. Compared with conventional path planning algorithms, the deep reinforcem…

Cited by 103 publications (40 citation statements) · References 16 publications
“…However, there are several challenges to be addressed [7], such as multi-agent credit assignment, global exploration, relative over-generalization, and scalability. In the context of optimizing ASV fleets, the contributions of [17] are notable: a fleet meta-agent of three boat-like autonomous vehicles is trained using Deep Q-Learning (DQL) to perform swarm-cooperative trajectories. Multi-agent local trajectory optimization is also addressed in [18], where the DRL goal is to optimize the policies of 3-5 agents to reach several final positions through static obstacles.…”
Section: Related Work
confidence: 99%
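The Deep Q-Learning approach mentioned above can be illustrated with a minimal sketch. This is a hypothetical toy problem of our own (a 1-D corridor, not the actual fleet setup of [17]): tabular off-policy Q-learning, which DQL generalizes by replacing the Q-table with a neural-network approximator so it scales to continuous vehicle state spaces.

```python
import numpy as np

# Hypothetical toy sketch (ours, not the setup of [17]): off-policy tabular
# Q-learning on a 1-D corridor; the agent must reach the rightmost state.
rng = np.random.default_rng(0)
n_states, n_actions = 6, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9             # learning rate, discount factor

for episode in range(500):
    s = 0
    while s != n_states - 1:        # last state is the goal
        a = int(rng.integers(n_actions))  # random exploration (off-policy)
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap on the best next action
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

greedy = Q[:-1].argmax(axis=1)      # learned policy for non-goal states
print(greedy)                       # expect all 1s: always move right
```

The greedy policy extracted from the learned Q-values moves right in every state, since the discounted return of heading straight to the goal dominates any detour.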
“…This dimensionality problem becomes infeasible with a large number of agents, limiting the scalability of such methods, as they are unable to deal with changes in the fleet size. This is the case in [17], where the fleet size is fixed and the action space is small (|A| = 27); adding more vehicles would explode the scale of the problem. Some studies try to deal with the drawbacks of both methodologies by designing approaches that combine the purely independent approach with centralized learning, such as [14], [15].…”
Section: Related Work
confidence: 99%
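The scalability issue quoted above can be made concrete with a short illustration of our own (not from the cited papers): a centralized meta-agent acts over the Cartesian product of each vehicle's actions, so the joint action space grows exponentially with fleet size.

```python
# Hypothetical illustration (ours): joint action-space size for a single
# meta-agent controlling a fleet, assuming a fixed discrete action set per
# vehicle. |A| = 27 corresponds to three vehicles with three actions each.
def joint_action_space_size(n_vehicles: int, actions_per_vehicle: int = 3) -> int:
    """|A| = actions_per_vehicle ** n_vehicles for a joint-action agent."""
    return actions_per_vehicle ** n_vehicles

print(joint_action_space_size(3))   # 27, the fixed three-vehicle case
print(joint_action_space_size(10))  # 59049: impractical as a discrete output
```

This is why methods with a fixed joint action space cannot absorb changes in fleet size: every added vehicle multiplies the number of joint actions the single agent must enumerate.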
“…The working mode of multi-robot cooperation brings more challenges to inter-individual motion planning within the group. How to carry out cooperative motion planning effectively is the distinctive feature of this field, which differs from single-robot motion planning. The architecture of reinforcement learning motion planning systems for multiple mobile robots can be mainly divided into two categories: centralized [129] and distributed [130]. Centralized reinforcement learning takes the common task of multiple robots as the training goal; a centralized computing unit obtains the state and sensor information of all robots and is responsible for centralized strategy training and distribution.…”
Section: Multi-Robot Cooperative Planning
confidence: 99%
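The centralized/distributed split described above can be sketched as two minimal interfaces. All names here are ours, invented for illustration, not taken from [129] or [130]: the centralized variant maps the joint state of all robots through one policy, while the distributed variant gives each robot its own policy over its local state.

```python
from typing import Callable, List, Sequence

# Hypothetical interface sketch (names are ours, not from [129]/[130]).
State = Sequence[float]
Action = int

def centralized_step(joint_policy: Callable[[List[State]], List[Action]],
                     states: List[State]) -> List[Action]:
    # A central computing unit sees every robot's state and emits all actions.
    return joint_policy(states)

def distributed_step(policies: List[Callable[[State], Action]],
                     states: List[State]) -> List[Action]:
    # Each robot maps only its own local state to its own action.
    return [pi(s) for pi, s in zip(policies, states)]

# Toy usage: two robots steer toward x = 0 (+1 if left of it, -1 otherwise).
states = [[-2.0], [3.0]]
go_home = lambda s: 1 if s[0] < 0 else -1
print(distributed_step([go_home, go_home], states))               # [1, -1]
print(centralized_step(lambda ss: [go_home(s) for s in ss], states))  # [1, -1]
```

The trade-off follows directly from the signatures: the centralized policy needs global state (and must be retrained if the fleet size changes), while the distributed policies scale trivially but cannot coordinate beyond what local observations reveal.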
“…Stability of the closed-loop system was achieved by the use of an additional supervisory line in the control law. Finally, in [24], the authors present the application of deep reinforcement learning algorithms for mobile-robot and formation path planning, with a specific focus on reliable obstacle avoidance in constrained maritime environments. The designed RL path planning algorithm is also able to handle other complex issues, such as compliance with vehicle motion constraints.…”
Section: Introduction
confidence: 99%