2018 IEEE International Conference on Robotics and Automation (ICRA) 2018
DOI: 10.1109/icra.2018.8461096
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-Based Planning

Abstract: We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL). The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology. Next, the sampling-based planners provide roadmaps which connect robot configurations that can be successfully navigated by the RL agent. The same RL agents are used to control the robot under…


Cited by 274 publications (173 citation statements)
References 28 publications
“…Overall, we show improved performance, better roadmap generation, and easier on-robot transfer, including a relative success rate increase of 40% over [21], and 94% over [11], while maintaining good performance despite increasing noise. We also show that only adding edges when agents can always navigate them makes roadmaps cheaper to build and improves navigation success; denser roadmaps also have higher simulated success rates but at substantial roadmap construction cost.…”
Section: Introduction
confidence: 79%
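The conservative edge criterion quoted above (add a roadmap edge only when the agent can always navigate it) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `agent_navigate` interface, the neighbor function, and the number of attempts are all assumptions for the sake of the example.

```python
def can_connect(agent_navigate, start, goal, attempts=20):
    """Accept an edge only if the RL agent succeeds on every simulated
    attempt between the two configurations (conservative criterion).
    `agent_navigate` is a hypothetical callable returning True on success."""
    return all(agent_navigate(start, goal) for _ in range(attempts))

def build_roadmap(samples, agent_navigate, neighbors):
    """Connect each sampled configuration to its candidate neighbors,
    keeping only edges the agent can always navigate."""
    edges = set()
    for s in samples:
        for n in neighbors(s):
            if n != s and can_connect(agent_navigate, s, n):
                edges.add((s, n))
    return edges
```

Requiring success on every attempt (rather than a success-rate threshold) is what makes the roadmap cheaper to build: candidate edges can be rejected as soon as a single attempt fails.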
“…In other words, PRM-RL is a tool for generating paths which an RL agent can reliably satisfy without violating the constraints of its task. In [21] we demonstrated PRM-RL's success on tasks with constraints, but in this work, we focus solely on the navigation task, which collapses the task predicate L(x) to remaining within C_free, and collapses the full configuration space available to the robot to a task space T limited to the robot's position and orientation.…”
Section: Problem Statement
confidence: 99%
“…Hence, it is promising to combine these approaches and merge the advantages of both. Faust et al. [15] use a reinforcement learning agent to learn short-range, point-to-point navigation policies for 2D and 3D action spaces, which capture the robot dynamics and task constraints without considering the large-scale topology. Sampling-based planning is used to plan waypoints, which gives the planner long-range, goal-directed behavior.…”
Section: Related Work
confidence: 99%
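The hierarchical execution described above (a sampling-based planner supplies waypoints; the short-range RL policy drives the robot between them) can be sketched as below. The interface is hypothetical: `rl_policy_step`, `reached`, and the step budget are illustrative assumptions, not the authors' API.

```python
def prm_rl_execute(waypoints, rl_policy_step, reached, max_steps=500):
    """Follow a planner-supplied waypoint sequence with a short-range
    RL policy. Returns (success, final_state)."""
    state = waypoints[0]
    for goal in waypoints[1:]:
        for _ in range(max_steps):
            if reached(state, goal):
                break  # waypoint attained; move on to the next one
            state = rl_policy_step(state, goal)  # one point-to-point step
        else:
            return False, state  # RL agent failed to reach this waypoint
    return True, state
```

The split of responsibilities is the key design point: the planner handles large-scale topology while the learned policy handles local dynamics, so neither component needs the other's knowledge.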
“…A bad reward function may lead to local minima or may cause an agent to wander around forever. To overcome this issue in long-range navigation tasks, [20] presents a hybrid approach that combines sampling-based path planning with RL. [14], [15] add a separate supervised learning (SL) pre-training phase to the RL approach to teach agents how to reach their goal location, without regard to collisions with other agents.…”
Section: Introduction
confidence: 99%