2022
DOI: 10.3390/s22103847

RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments

Abstract: Autonomous navigation in dynamic environments where people move unpredictably is an essential task for service robots in real-world populated scenarios. Recent works in reinforcement learning (RL) have been applied to autonomous vehicle driving and to navigation around pedestrians. In this paper, we present a novel planner (reinforcement learning dynamic object velocity space, RL-DOVS) based on an RL technique for dynamic environments. The method explicitly considers the robot kinodynamic constraints for selec…
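
As a rough illustration of the idea outlined in the abstract, below is a minimal, hypothetical Python sketch of RL-based action selection over a velocity space under kinodynamic constraints. The limits, the reachable_commands and select_command helpers, and the tabular Q-values are assumptions for illustration only, not the planner described in the paper.

```python
# Illustrative sketch only: an RL policy choosing velocity commands that are
# dynamically reachable within one control period. All names and values below
# are assumptions, not the RL-DOVS implementation.
import random

# Assumed kinodynamic limits (hypothetical values).
V_MAX, W_MAX = 0.7, 1.0          # max linear (m/s) and angular (rad/s) velocity
A_V, A_W, DT = 0.5, 1.5, 0.2     # acceleration limits and control period (s)

def reachable_commands(v, w, n=5):
    """Velocity commands reachable from (v, w) within one control period."""
    cmds = []
    for i in range(n):
        for j in range(n):
            dv = (-A_V + 2 * A_V * i / (n - 1)) * DT
            dw = (-A_W + 2 * A_W * j / (n - 1)) * DT
            nv = min(max(v + dv, 0.0), V_MAX)
            nw = min(max(w + dw, -W_MAX), W_MAX)
            cmds.append((round(nv, 2), round(nw, 2)))
    return cmds

def select_command(q_table, state, v, w, epsilon=0.1):
    """Epsilon-greedy choice over the reachable velocity window."""
    candidates = reachable_commands(v, w)
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda c: q_table.get((state, c), 0.0))

# Usage example with an empty value table and a placeholder state.
print(select_command({}, state="obstacle_ahead", v=0.3, w=0.0))
```

Restricting the candidate set to commands reachable within one control period enforces the kinodynamic constraints by construction; the learned values would then have to encode avoidance of the moving obstacles represented in the velocity space.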

Cited by 7 publications (2 citation statements)
References: 19 publications
“…We test and compare the use of the complete integrated system with the S-DOVS planner on its own in different maps, our proposed waypoint generator with others of the state of the art and extensively evaluate the system in both simulation and the real world. Note that we don't compare the S-DOVS planner with other local reactive planners such as RL-DOVS [22], as developing a local planner is not the goal of the paper, and any other local reactive planner could be used in the system instead of S-DOVS. Common reactive planners, by themselves, are limited to handling only a few meters of navigation within a single room in the real world.…”
Section: Experiments and Discussion
confidence: 99%
“…In recent years, motion planning in dynamic environments has been tackled by learning approaches such as reinforcement learning, as demonstrated in various works, including [19][20][21], as well as by [22,23], which also incorporated the DOVS model. However, these methods face limitations due to the complexity of the real world and the large number of variables involved.…”
Section: Related Work
confidence: 99%