2023
DOI: 10.3390/app13148174
Dynamic Obstacle Avoidance and Path Planning through Reinforcement Learning

Abstract: The use of reinforcement learning (RL) for dynamic obstacle avoidance (DOA) algorithms and path planning (PP) has become increasingly popular in recent years. Despite the importance of RL in this growing technological era, few studies have systematically reviewed this research concept. Therefore, this study provides a comprehensive review of the literature on dynamic reinforcement learning-based path planning and obstacle avoidance. Furthermore, this research reviews publications from the last 5 years (2018–20…

Cited by 20 publications (4 citation statements)
References 61 publications
“…The dynamic and unforeseeable state of edge cases in the real world makes the application of the navigation task challenging. Ensuring that the systems can detect and respond effectively to changing and unstructured scenarios is essential for safe and reliable navigation [ 32 ]. The conventional approach, which works best in a static environment, is known to be computationally intensive and must be adjusted to varying environment states and motion dynamics.…”
Section: Learning-Based Navigation Techniques (Methods)
confidence: 99%
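The "conventional approach" referenced in the statement above typically means deterministic graph search over a static map, replanned whenever the environment changes. A minimal sketch, assuming a 4-connected occupancy grid and A* with a Manhattan heuristic (the grid layout, costs, and names are illustrative, not taken from the reviewed paper):

```python
# Illustrative A* on a static occupancy grid: the "conventional" planner
# that works well when obstacles do not move, but must be rerun from
# scratch when the map changes.
import heapq

def astar(grid, start, goal):
    """4-connected A* with Manhattan heuristic; grid cells True = blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, state, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no path: a static plan must be recomputed if blocked

grid = [[False] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = True   # a wall the planner must skirt
path = astar(grid, (0, 0), (4, 4))
```

The search is exact on a frozen map, which is the computational-cost and rigidity trade-off the citing authors contrast with learning-based methods that adapt to varying environment states.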
“…Shabbir et al [ 31 ] reviewed the capabilities of deep learning through environmental perception and modelling for an efficient navigation experience. Most models integrate Q-learning techniques to solve navigation task challenges through path planning and obstacle avoidance using discrete actions [ 32 , 33 ]. The authors in [ 34 ] used a reward function and continuous action space to achieve safe navigation tasks.…”
Section: Related Work
confidence: 99%
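The discrete-action Q-learning formulation the citing authors attribute to [ 32 , 33 ] can be sketched minimally. The grid size, reward values (+10 goal, -10 collision, -1 per step), and hyperparameters below are illustrative assumptions, not taken from the cited works:

```python
# Minimal tabular Q-learning sketch for grid path planning with obstacle
# avoidance using four discrete actions. All constants are illustrative.
import random

N = 5                        # grid side length (assumed)
GOAL = (4, 4)
OBSTACLE = (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1             # assumed hyperparameters

Q = {((r, c), a): 0.0 for r in range(N) for c in range(N)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply a discrete action; walls clamp, obstacle and goal terminate."""
    dr, dc = ACTIONS[action]
    nr = min(max(state[0] + dr, 0), N - 1)
    nc = min(max(state[1] + dc, 0), N - 1)
    nxt = (nr, nc)
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt == OBSTACLE:
        return nxt, -10.0, True
    return nxt, -1.0, False

def train(episodes=2000, seed=0):
    random.seed(seed)
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if random.random() < EPS:           # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, x)] for x in range(len(ACTIONS)))
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = s2

def greedy_path(max_steps=25):
    """Follow the learned policy from the start; returns visited states."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path

train()
path = greedy_path()
```

The learned table maps each discrete state-action pair to an expected return, so a trained agent reaches the goal while steering around the penalized cell; continuous-action variants such as the one attributed to [ 34 ] replace this table with a function approximator.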
“…Significant advances have been made in LiDAR SLAM, which can often provide more robust and simultaneous localization and mapping in indoor navigation systems using 3D spatial information directly captured by LiDAR point clouds, and have been employed in robots and automated guided vehicles for industrial applications [15]. The autonomous obstacle avoidance and trajectory planning control strategy with low computational complexity, high cost-effectiveness, closed-loop stability verification, and the ability to quickly plan a collision-free smooth trajectory curve has been used in the overall control system of autonomous mobile robots [16][17][18][19][20].…”
Section: Literature Background
confidence: 99%
“…Nevertheless, the research acknowledges the need for additional validation in different realistic scenarios and highlights potential challenges associated with the integration of neural network models into robotic systems. Reinforcement learning (RL)-based techniques for dynamic obstacle avoidance and path planning are widely used [17]. Nonetheless, limited validation under diverse scenarios may limit the generality of these findings.…”
Section: Introduction
confidence: 99%