2020
DOI: 10.3390/s20174836

Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning

Abstract: It is challenging to avoid obstacles safely and efficiently for multiple robots of different shapes in distributed and communication-free scenarios, where robots do not communicate with each other and only sense other robots’ positions and obstacles around them. Most existing multi-robot collision avoidance systems either require communication between robots or require expensive movement data of other robots, like velocities, accelerations and paths. In this paper, we propose a map-based deep reinforcement lea…

Cited by 30 publications (23 citation statements)
References 53 publications
“…That is, the robot learns from the experience of pedestrian movement, understands the crowded scene, and encodes the human-computer interaction characteristics into the navigation strategy. The results in this field [12,20,21] have confirmed the superior performance of deep reinforcement learning in crowd perception navigation. These deep reinforcement learning methods first collect relevant data from the surrounding people to build a value network, train in a reinforcement learning framework, and finally map the information to the control commands of the robot.…”
Section: Deep Reinforcement Learning Methods (mentioning)
confidence: 66%
“…(1) Expansion of network input: Considering that the agent cannot uniquely distinguish its state based on current observations alone, the simplest solution is to add several previous observation frames as network inputs to improve its ability to distinguish among states [36,51,72,75,76]. In addition, previous rewards and actions also contain state information, so some studies have fed previous rewards and actions into the network [33,44,63,77].…”
Section: Solution (mentioning)
confidence: 99%
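The frame-stacking idea described in the citation above can be sketched as follows. This is a minimal illustrative implementation, not code from the cited papers; the buffer size `k` and the frame shape are assumed values chosen for the example.

```python
from collections import deque

import numpy as np


class ObservationStacker:
    """Stack the last k observation frames into one network input.

    A sketch of the frame-stacking technique: keeping several previous
    frames helps the agent distinguish states that look identical from
    a single observation (e.g. same position, different velocity).
    """

    def __init__(self, k=4, frame_shape=(60, 60)):
        self.k = k
        self.frame_shape = frame_shape
        self.frames = deque(maxlen=k)  # oldest frame dropped automatically

    def reset(self, first_frame):
        # At episode start, fill the buffer by repeating the first frame.
        self.frames.clear()
        for _ in range(self.k):
            self.frames.append(first_frame)
        return self.stacked()

    def step(self, frame):
        # Push the newest frame; the deque discards the oldest one.
        self.frames.append(frame)
        return self.stacked()

    def stacked(self):
        # Shape (k, H, W): channel-stacked input fed to the policy network.
        return np.stack(self.frames, axis=0)
```

Previous rewards and actions can be appended to the input vector in the same sliding-window fashion.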
“…This technique has been applied to DRL-based navigation tasks. Chen et al. [76] introduced a two-stage training process for curriculum learning. In the first stage, they trained the policy in a random scenario with eight robots; in the second stage, they trained the policy in both random and circular scenarios with 16 robots.…”
Section: Solution (mentioning)
confidence: 99%
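The two-stage curriculum described above can be sketched as a scenario-sampling schedule. The stage layouts and robot counts follow the citation text, but the episode threshold at which training switches stages is an assumed parameter, and the sampler itself is a hypothetical stand-in for a real simulator reset.

```python
import random


def make_curriculum(stage_one_episodes=10_000):
    """Two-stage curriculum sketch: easy scenarios first, harder later.

    Stage 1: random scenarios with 8 robots.
    Stage 2: a mix of random and circular scenarios with 16 robots.
    The switch point `stage_one_episodes` is an illustrative assumption.
    """

    def sample_scenario(episode):
        if episode < stage_one_episodes:
            # Stage 1: simpler task so the policy learns basic avoidance.
            return {"layout": "random", "num_robots": 8}
        # Stage 2: denser, more structured interactions.
        layout = random.choice(["random", "circle"])
        return {"layout": layout, "num_robots": 16}

    return sample_scenario
```

Each training episode would call the sampler with its episode index and reset the simulator with the returned scenario configuration.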
“…Researchers [1,2,3,4,5,6,7,8] have studied decentralized multi-robot collision avoidance algorithms, and some fruitful results have been achieved, such as collision avoidance with deep reinforcement learning (CADRL) [1], socially aware CADRL (SA-CADRL) [2], and the reciprocal velocity obstacle (RVO) [5]. The methods mentioned above were designed for cluttered workspaces.…”
Section: Introduction (mentioning)
confidence: 99%
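The velocity-obstacle family of methods mentioned above rests on one geometric test: does the current relative velocity put two circular robots on a collision course? Below is a minimal sketch of the classic (non-reciprocal) velocity-obstacle condition; the time horizon is an assumed parameter, and this is not the ORCA/RVO implementation from the cited paper.

```python
import math


def in_velocity_obstacle(p_rel, v_rel, r_sum, horizon=10.0):
    """Return True if v_rel leads to a collision within `horizon` seconds.

    p_rel = p_other - p_self (relative position)
    v_rel = v_self - v_other (relative velocity)
    r_sum = sum of the two robots' radii
    Collision iff some t in [0, horizon] satisfies |p_rel - t*v_rel| <= r_sum.
    """
    px, py = p_rel
    vx, vy = v_rel
    # Quadratic |p_rel - t*v_rel|^2 = r_sum^2 in t: a*t^2 + b*t + c = 0.
    a = vx * vx + vy * vy
    b = -2.0 * (px * vx + py * vy)
    c = px * px + py * py - r_sum * r_sum
    if c <= 0:
        return True  # already overlapping
    if a == 0:
        return False  # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # the ray misses the combined disc
    t = (-b - math.sqrt(disc)) / (2 * a)  # earliest contact time
    return 0.0 <= t <= horizon
```

RVO-style planners use this predicate to discard candidate velocities inside the cone and pick the admissible velocity closest to the preferred one.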