2022
DOI: 10.1109/lra.2022.3178791
Deep Reinforcement Learning for Robot Collision Avoidance With Self-State-Attention and Sensor Fusion

Cited by 34 publications (6 citation statements) · References 37 publications
“…While the laser range data provide information about the distance of obstacles, they do not provide spatial information near the robot, which is implicit in the image data of a red, green, blue (RGB) or red, green, blue, depth (RGB‑D) camera. Therefore, overcoming the sim‐to‐real gap in image data [33] or fusing laser range data with image data [34] is a promising research topic in DRL‐based path planning methods. In addition, the training improvement achieved using LSTM is considerable; thus, the combination of the BOAE mechanism and recurrent neural networks is worth exploring.…”
Section: Discussion
confidence: 99%
“…In research [113], a straightforward yet efficient deep reinforcement learning (DRL) approach featuring a self-state-attention unit is introduced. A solution is proposed that enables the navigation of a tall mobile robot, measuring one meter in height, using low-cost devices such as a 2D LiDAR sensor and a monocular camera.…”
Section: Sensor Fusion
confidence: 99%
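As a rough illustration of the self-state-attention idea mentioned in the statement above, the robot's own state (e.g. goal distance and velocity) can be used to compute attention weights that reweight groups of sensor features. This is a hypothetical sketch only; the weight matrix `W`, the state layout, and the feature grouping are assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def self_state_attention(robot_state, sensor_features, W):
    """Weight groups of sensor features by attention scores
    derived from the robot's own state (hypothetical sketch)."""
    scores = W @ robot_state               # one score per feature group
    weights = softmax(scores)              # normalized attention weights
    return weights[:, None] * sensor_features

rng = np.random.default_rng(0)
state = rng.normal(size=4)                 # e.g. goal distance/angle, velocities
features = rng.normal(size=(3, 8))         # 3 feature groups of dimension 8
W = rng.normal(size=(3, 4))                # assumed projection matrix
out = self_state_attention(state, features, W)
print(out.shape)                           # (3, 8)
```

The key design point is that the attention weights depend only on the robot's self-state, so the network can learn to emphasize, say, laser features when moving fast and camera features when near the goal.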
“…Tai et al. [33] developed a navigation learning model in a simulated environment using a 10-dimensional laser beam as one input to the model. Han et al. [34] used the fusion of RGB images from a camera and 2D LiDAR sensor data as input to a self-state-attention DRL network to investigate the effect of using 2D LiDAR on a tall robot. In their work, the training environment is captured and processed before being passed to the training network.…”
Section: Related Work
confidence: 99%
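The kind of laser–camera fusion described above is often implemented by preprocessing each modality separately and concatenating the results into one observation vector for the DRL policy. The sketch below illustrates this pattern under stated assumptions: the sector-wise min-pooling of the scan, the 16-dimensional image embedding, and the observation layout are all hypothetical, not taken from the cited papers:

```python
import numpy as np

def min_pool_scan(scan, bins=10):
    """Downsample a 2D LiDAR scan to `bins` sectors by taking the
    minimum range in each sector (a common preprocessing choice)."""
    sectors = np.array_split(scan, bins)
    return np.array([s.min() for s in sectors])

def fuse_inputs(scan, image_feat, goal, velocity):
    """Concatenate laser, image, and robot-state features into one
    observation vector for a DRL policy (hypothetical layout)."""
    return np.concatenate([min_pool_scan(scan), image_feat, goal, velocity])

rng = np.random.default_rng(1)
scan = rng.uniform(0.1, 10.0, size=360)    # 1-degree LiDAR resolution
image_feat = rng.normal(size=16)           # e.g. a CNN embedding of the RGB frame
obs = fuse_inputs(scan, image_feat,
                  goal=np.array([2.0, 0.5]),        # relative goal position
                  velocity=np.array([0.3, 0.0]))    # linear/angular velocity
print(obs.shape)                           # (30,)
```

Min-pooling keeps the nearest obstacle in each sector, which is the safety-relevant quantity for collision avoidance, while the image embedding supplies the spatial context near the robot that raw ranges lack.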