2022
DOI: 10.1088/1742-6596/2294/1/012034
A simple implement of Q-learning in robot path planning

Abstract: This paper first introduces the robot path planning problem, including a brief definition of path planning, some representative methods, and previous applications of Q-learning. Second, it compares several typical methods, such as Breadth-First Search, Depth-First Search, A*, and deep learning, with corresponding pseudocode in detail; their advantages and disadvantages are also listed in this part. Third, we carry out a simple simulation experiment applying the Q-learning method. The …
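The kind of experiment the abstract describes can be sketched as tabular Q-learning on a small grid world. The grid size, obstacle layout, rewards, and hyperparameters below are illustrative assumptions, not values taken from the paper:

```python
# Minimal tabular Q-learning sketch for grid path planning.
# All numeric settings (grid size, rewards, alpha, gamma, epsilon)
# are assumed for illustration, not taken from the paper.
import random

ROWS, COLS = 4, 4
GOAL = (3, 3)
OBSTACLES = {(1, 1), (2, 2)}                   # assumed obstacle cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Apply an action; bumping a wall or obstacle leaves the agent in place."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -1.0                     # bump: stay put, penalty
    if (r, c) == GOAL:
        return (r, c), 10.0                    # goal reached: reward
    return (r, c), -0.1                        # ordinary move: step cost

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):                   # cap episode length
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            s2, reward = step(s, ACTIONS[a])
            # Q-learning update: off-policy TD target uses max over next actions
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == GOAL:
                break
    return Q

def greedy_path(Q, start=(0, 0), limit=50):
    """Follow the learned greedy policy from start until the goal (or limit)."""
    path, s = [start], start
    while s != GOAL and len(path) < limit:
        s, _ = step(s, ACTIONS[max(range(4), key=lambda i: Q[s][i])])
        path.append(s)
    return path

if __name__ == "__main__":
    Q = train()
    print(greedy_path(Q))
```

After training, the greedy policy traces a collision-free path from the start corner to the goal corner, which is the behavior the abstract's simulation experiment reports for its learned agent.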

Cited by 1 publication (1 citation statement)
References 7 publications
“…Lieping Zhang [23] proposed a Self-Adaptive Reinforcement-Exploration Q-learning (SARE-Q) method in various grid state space environments and compared the results to those of previous CQL and Self-Adaptive Q-learning (SA-Q). Conghao Jin et al [24] and Haoran Gao et al [25] both successfully implemented CQL for path planning in grid state space simulation environments, and both agents were able to learn how to find the desired path. However, their results were not compared to any other path planning methods.…”
Section: Related Work
confidence: 99%