2021
DOI: 10.1109/lra.2021.3068106
Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning

Abstract: Visual navigation is essential for many applications in robotics, from manipulation, through mobile robotics to automated driving. Deep reinforcement learning (DRL) provides an elegant map-free approach integrating image processing, localization, and planning in one module, which can be trained and therefore optimized for a given environment. However, to date, DRL-based visual navigation was validated exclusively in simulation, where the simulator provides information that is not available in the real world, e…

Cited by 34 publications (19 citation statements)
References 33 publications
“…In [28], an unmanned aerial vehicle is trained with Gazebo to fly among obstacles with a 2D LiDAR. For indoor navigation of UGVs, methods can be found where the main exteroceptive sensors are 2D rangefinders [29, 30, 31], depth cameras with a limited field of view [32], or RGB cameras [9, 33]. In [34], 2D virtual range data generated from a monocular camera is employed by a UGV as input for RL.…”
Section: Related Work
confidence: 99%
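To make the idea in [34] concrete, the sketch below shows one way 2D virtual range data could be derived from a camera-based depth estimate. This is a hedged illustration only: the function name, band width, and beam count are assumptions, not details taken from the cited work.

```python
import numpy as np

def depth_to_virtual_scan(depth, h_fov_deg=90.0, num_beams=64, band=10):
    """Collapse a depth image (H x W, metres) into a 1D virtual range scan.

    A horizontal band around the optical centre is taken and, for each of
    `num_beams` angular sectors, the closest depth value is kept, mimicking
    a planar 2D rangefinder. Illustrative only; the cited work [34] may
    construct its virtual range data differently.
    """
    h, w = depth.shape
    rows = depth[h // 2 - band: h // 2 + band, :]      # horizontal slice
    # Split image columns into angular sectors and keep the minimum range.
    sectors = np.array_split(np.arange(w), num_beams)
    scan = np.array([rows[:, cols].min() for cols in sectors])
    angles = np.linspace(-h_fov_deg / 2, h_fov_deg / 2, num_beams)
    return angles, scan

# Example: a synthetic 120x160 depth image with values in [0.5, 5] m.
depth = np.random.uniform(0.5, 5.0, size=(120, 160))
angles, scan = depth_to_virtual_scan(depth)
print(scan.shape)   # (64,)
```

The resulting fixed-length scan can then be fed to an RL policy in place of a physical 2D rangefinder.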
“…Finally, it is relevant to mention that most of the previously cited papers about RL adopt an Actor–Critic scheme [9, 27, 28, 29, 31, 33, 35, 36, 37, 38].…”
Section: Related Work
confidence: 99%
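For readers unfamiliar with the Actor–Critic scheme referenced above, a minimal generic sketch follows. The network sizes, the TD(0) advantage, and the loss weighting are illustrative assumptions, not the configuration of any of the cited papers.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared-trunk actor-critic: the actor outputs a policy over discrete
    actions, the critic estimates the state value used as a baseline."""

    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)    # policy logits
        self.critic = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, obs):
        z = self.trunk(obs)
        return self.actor(z), self.critic(z).squeeze(-1)

# One advantage-actor-critic style update on a single transition (illustrative).
model = ActorCritic(obs_dim=32, n_actions=4)
optim = torch.optim.Adam(model.parameters(), lr=3e-4)

obs = torch.randn(1, 32)
logits, value = model(obs)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
reward, next_value, gamma = 1.0, 0.0, 0.99           # dummy rollout signals

advantage = reward + gamma * next_value - value      # TD(0) advantage
actor_loss = -dist.log_prob(action) * advantage.detach()
critic_loss = advantage.pow(2)
optim.zero_grad()
(actor_loss + 0.5 * critic_loss).mean().backward()
optim.step()
```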
“…The target-driven navigation approach is a new DRL-based end-to-end navigation approach, first proposed by Zhu et al. [1] in 2017. This approach takes only the image of the current scene and the target object as input, and generates an action in the 3D environment as output [10]–[16]. Hence, when an agent takes its next action, it is conditioned on both the current state and the target, not just the current state, so there is no need to retrain the model for a new target.…”
Section: Related Work
confidence: 99%
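A rough sketch of the target-driven input–output structure described in this statement is given below. This is generic PyTorch code: the encoder, the fusion step, and the action space are illustrative assumptions and differ from the actual architecture used by Zhu et al. [1].

```python
import torch
import torch.nn as nn

class TargetDrivenPolicy(nn.Module):
    """Policy conditioned on both the current observation and the target
    image: embeddings of the two are fused, so switching targets only
    changes an input, not the trained weights."""

    def __init__(self, n_actions=4, emb=256):
        super().__init__()
        # Shared CNN encoder applied to both the observation and the target.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb), nn.ReLU(),
        )
        self.policy = nn.Sequential(
            nn.Linear(2 * emb, 256), nn.ReLU(),
            nn.Linear(256, n_actions),           # logits over discrete actions
        )

    def forward(self, obs_img, target_img):
        fused = torch.cat([self.encoder(obs_img), self.encoder(target_img)], dim=-1)
        return self.policy(fused)

policy = TargetDrivenPolicy()
obs = torch.randn(1, 3, 84, 84)     # current camera frame
goal = torch.randn(1, 3, 84, 84)    # image of the target object
action = policy(obs, goal).argmax(dim=-1)
```

Because the goal enters only as an input, a new target image can be supplied at inference time without retraining, which is the property highlighted in the quoted statement.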
“…Adversarial techniques have also been developed for generating challenging environments [21], [22]. Prior work has also explored augmenting real-world training with large amounts of procedurally generated environments via domain adaptation techniques [23], transfer learning [24], or fine-tuning [25]. We highlight that none of the methods above provide guarantees on generalization to real-world environments.…”
Section: A. Related Work
confidence: 99%
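As a loose illustration of the fine-tuning route mentioned above, the sketch below freezes a simulation-pretrained visual backbone and updates only a small action head on real-world data. All names, layer sizes, and the behaviour-cloning loss are assumptions for illustration, not the procedure of the cited works [23]–[25].

```python
import torch
import torch.nn as nn

# Hypothetical policy pretrained in procedurally generated simulation:
# a frozen visual backbone plus a small action head tuned on real data.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(32, 4)                   # 4 discrete navigation actions

for p in backbone.parameters():           # keep simulation-learned features
    p.requires_grad_(False)

optim = torch.optim.Adam(head.parameters(), lr=1e-4)   # small LR for fine-tuning
loss_fn = nn.CrossEntropyLoss()

# One behaviour-cloning style step on a (dummy) batch of real-world frames
# labelled with expert actions; an RL objective could be used instead.
frames = torch.randn(8, 3, 84, 84)
expert_actions = torch.randint(0, 4, (8,))
loss = loss_fn(head(backbone(frames)), expert_actions)
optim.zero_grad()
loss.backward()
optim.step()
```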