2022
DOI: 10.1155/2022/5433988

Path Planning Method of Mobile Robot Using Improved Deep Reinforcement Learning

Abstract: A mobile robot path planning method based on improved deep reinforcement learning is proposed. First, in order to conform to the actual kinematics model of the robot, the continuous environmental state space and discrete action state space are designed. In addition, an improved deep Q-network (DQN) method is proposed, which takes the directly collected information as the training samples and combines the environmental state characteristics of the robot and the target point to be reached as the input of the net…
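The state/action design the abstract describes — a continuous state vector combining the robot's sensed environment with the relative position of the target point, paired with a discrete action set matched to the robot's kinematics — can be sketched as follows. This is a minimal illustration under assumed values; the particular actions, sensor layout, and state encoding are not taken from the paper:

```python
import math

# Hypothetical discrete action set: (linear velocity m/s, angular velocity rad/s).
# The paper designs a discrete action space to fit the robot's kinematics;
# these specific commands are illustrative assumptions.
ACTIONS = [
    (0.5, 0.0),    # forward
    (0.3, 0.8),    # forward-left arc
    (0.3, -0.8),   # forward-right arc
    (0.0, 1.0),    # rotate left in place
    (0.0, -1.0),   # rotate right in place
]

def build_state(scan, pose, goal):
    """Form the continuous state vector fed to the Q-network: raw range
    readings concatenated with the goal expressed in the robot frame."""
    x, y, yaw = pose
    dx, dy = goal[0] - x, goal[1] - y
    dist = math.hypot(dx, dy)
    # Heading error: angle to the goal relative to the robot's orientation,
    # wrapped to [-pi, pi].
    heading = math.atan2(dy, dx) - yaw
    heading = math.atan2(math.sin(heading), math.cos(heading))
    return list(scan) + [dist, heading]

state = build_state(scan=[1.2, 0.8, 2.5], pose=(0.0, 0.0, 0.0), goal=(1.0, 1.0))
```

In this sketch the Q-network would output one Q-value per entry of `ACTIONS`, and the chosen index is mapped back to a velocity command, which is what lets a discrete-action DQN drive a robot with continuous kinematics.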

Cited by 12 publications (10 citation statements)
References 24 publications
“…In multiple experiments, the results of the DQN algorithm exhibited higher stability and consistency, while the results of the non-reinforcement-learning algorithms showed significant fluctuations. This suggests that the DQN algorithm has better stability and reliability in handling the intelligent agent path selection problem [14]. The experimental results are shown in Figures 5 and 6. In conclusion, the DQN algorithm exhibits clear superiority in the intelligent agent path selection problem, with higher computational efficiency, stronger learning and generalization capabilities, and better stability and reliability.…”
Section: Analysis Of Two Methods
confidence: 88%
“…While these methods can yield satisfactory results, they require extensive manual design and adjustment and are unable to adapt to complex environmental changes. In contrast, reinforcement learning algorithms can adapt to different environments through autonomous learning, demonstrating superior generalization capability and adaptability [14].…”
Section: Application Of Reinforcement Learning Methods
confidence: 99%
“…Shen, H. et al. [20] developed a DQN-based approach for automatic collision avoidance of multiple ships, incorporating ship maneuverability, human experience, and navigation rules. Wang, W. et al. [21] integrated COLREGs into the DRL algorithm and trained over multiple ships in rich encountering situations. These studies show that the application of DQN in path planning is promising, especially for the autonomous navigation of ships.…”
Section: Literature Review and Motivation
confidence: 99%
“…In 2016, Tai et al. [21] first applied the DQN algorithm to indoor mobile robots, which could complete path planning tasks in indoor scenarios, but the algorithm had low generalization. Wang et al. [22] introduced an improved DQN algorithm combined with artificial potential field methods to design reward functions, improving the efficiency of mobile robot path planning. However, it could not achieve continuous action output for robots.…”
Section: Related Work
confidence: 99%
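The citing work above mentions combining DQN with artificial potential field (APF) methods to design the reward function. A minimal sketch of reward shaping in that spirit is shown below; the specific functional form, gain values, and influence radius are illustrative assumptions, not the authors' formulation:

```python
def apf_reward(dist_to_goal, prev_dist, min_obstacle_dist,
               k_att=1.0, k_rep=0.5, d0=1.0):
    """Hypothetical APF-style reward shaping for a path-planning DQN:
    reward progress toward the goal (attractive term) and penalize
    proximity to obstacles within influence radius d0 (repulsive term).
    All gains (k_att, k_rep, d0) are assumed values."""
    # Attractive term: positive when the step reduced distance to the goal.
    r = k_att * (prev_dist - dist_to_goal)
    # Repulsive term: grows as the nearest obstacle gets closer than d0.
    if min_obstacle_dist < d0:
        r -= k_rep * (1.0 / min_obstacle_dist - 1.0 / d0)
    return r
```

Shaping of this kind gives the agent a dense learning signal at every step instead of a sparse reward only at the goal, which is the usual motivation for blending APF ideas into DQN training.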