2021
DOI: 10.1177/1729881421992621
Deep reinforcement learning for map-less goal-driven robot navigation

Abstract: Mobile robots that operate in real-world environments need to be able to safely navigate their surroundings. Obstacle avoidance and path planning are crucial capabilities for achieving autonomy of such systems. However, for new or dynamic environments, navigation methods that rely on an explicit map of the environment can be impractical or even impossible to use. We present a new local navigation method for steering the robot to global goals without relying on an explicit map of the environment. The proposed n…

Cited by 24 publications (14 citation statements)
References 20 publications
“…Tai et al [7] merged range findings, position relative to the target, and previous velocity as input and achieved collision-free navigation with DDPG. Dobrevski et al [8] grouped the range scanner readings into 30 bins as input and trained the agent with A2C [14] and a recurrent long short-term memory (LSTM) [15] neural network. In this paper, we propose an effective target-driven framework leveraging multiple sources of information as input for collision-free mapless navigation.…”
Section: Related Work
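The 30-bin range input described for [8] can be illustrated with a small sketch. This is an assumption-laden illustration, not the paper's implementation: the function name `bin_scan`, the 360-reading scan, and the choice of the per-sector minimum are all hypothetical; the cited work only states that the range scanner readings are grouped into 30 bins.

```python
import numpy as np

def bin_scan(ranges, n_bins=30):
    """Downsample a laser scan into n_bins contiguous angular sectors.

    Taking the minimum range per sector keeps the closest obstacle in each
    direction, a common (assumed) choice for collision-avoidance inputs.
    """
    ranges = np.asarray(ranges, dtype=float)
    sectors = np.array_split(ranges, n_bins)       # contiguous angular sectors
    return np.array([s.min() for s in sectors])    # closest obstacle per sector

scan = np.full(360, 5.0)       # open space: 5 m readings everywhere...
scan[100:110] = 0.8            # ...except a nearby obstacle in one direction
features = bin_scan(scan)      # length-30 feature vector for the agent
```

A compact fixed-size vector like this keeps the policy network small while still telling the agent where the nearest obstacles are.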
“…The rapid progress of deep reinforcement learning has made mapless navigation feasible. In the past few years, many scholars have implemented vision-based [4,5,6] or ranging-based [7,8] mapless navigation.…”
Section: Introduction
“…Deep Reinforcement Learning (DRL) has been widely used in complex control problems under uncertainty. Dobrevski et al [23] proposed a navigation model based on the advantage actor-critic method, which directly maps robot observations to motion commands. Risk-aware navigation that learns to take conservative actions when the chance of collision is high and to move faster otherwise has been addressed by Kahn et al [21].…”
Section: Learning-based Navigation
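The phrase "directly maps robot observations to motion commands" can be made concrete with a minimal forward-pass sketch. All dimensions and names here are assumptions: a 34-dimensional observation (30 range bins, 2 relative-goal values, 2 previous-velocity values), a single hidden layer, and tanh-squashed outputs stand in for whatever architecture the cited model actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 30 + 2 + 2   # range bins + goal (distance, angle) + previous (v, w)
HID = 64               # hidden width (arbitrary for this sketch)

# Randomly initialized weights; a trained actor would have learned these.
W1 = rng.normal(0.0, 0.1, (HID, OBS_DIM))
b1 = np.zeros(HID)
W2 = rng.normal(0.0, 0.1, (2, HID))
b2 = np.zeros(2)

def actor(obs):
    """Map one observation vector to (linear, angular) velocity commands."""
    h = np.tanh(W1 @ obs + b1)      # shared hidden features
    v, w = np.tanh(W2 @ h + b2)     # both squashed to [-1, 1]
    return 0.5 * (v + 1.0), w       # linear vel in [0, 1], angular in [-1, 1]

obs = np.concatenate([np.full(30, 5.0), [2.0, 0.1], [0.3, 0.0]])
v_cmd, w_cmd = actor(obs)
```

Bounding the outputs with tanh is a common way to keep sampled actions inside the robot's velocity limits; the critic head (omitted here) would share the hidden features and output a scalar state value.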
“…DRL has spurred significant breakthroughs in many applications, such as manipulator control [18], autonomous driving [19,20], and games [21,22]. Given these strengths, researchers have attempted to apply DRL to the PointGoal navigation problem [4,[23][24][25][26]. Compared with traditional methods, DRL-based methods avoid extensive hand-engineering and instead learn the complete navigation system directly from data.…”
Section: Introduction
“…In summary, approaches to PointGoal navigation can broadly be classified into two categories: (1) traditional navigation methods that decompose navigation into localization, mapping, and planning [12][13][14][15], or (2) learned neural policies using DRL [4,[23][24][25][26][27][28][29]35]. Traditional navigation methods suffer from intensive computational demands and involve numerous parameters that must be tuned manually.…”
Section: Introduction