2023
DOI: 10.3390/app13116847

Path Following for Autonomous Ground Vehicle Using DDPG Algorithm: A Reinforcement Learning Approach

Abstract: The potential of autonomous driving technology to revolutionize the transportation industry has attracted significant attention. Path following, a fundamental task in autonomous driving, involves accurately and safely guiding a vehicle along a specified path. Conventional path-following methods often rely on rule-based or parameter-tuning aspects, which may not be adaptable to complex and dynamic scenarios. Reinforcement learning (RL) has emerged as a promising approach that can learn effective control policie…

Cited by 2 publications (2 citation statements) | References 36 publications
“…The DDPG RL algorithm has demonstrated its potential in the field of control by outperforming traditional path control methods when applied to vehicles following predefined paths [ 33 ]. The DDPG algorithm employs the actor–critic network framework to update the actor and critic models [ 34 ].…”
Section: LKS Methods Based on DDPG
confidence: 99%
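For context, the actor–critic update that DDPG performs can be illustrated with a minimal PyTorch sketch. This is not the cited paper's implementation; the state/action dimensions, network sizes, and hyperparameters below are placeholder assumptions for a path-following setting.

```python
# Minimal DDPG actor-critic update sketch (illustrative; not the paper's code).
# Assumes PyTorch; dimensions, layer sizes, and hyperparameters are placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed observation/action sizes for path following
GAMMA, TAU = 0.99, 0.005       # assumed discount factor and soft-update rate

class Actor(nn.Module):
    """Deterministic policy: maps a state to a bounded continuous action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-function: scores a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s_next, done):
    """One gradient step on a mini-batch of transitions (s, a, r, s', done)."""
    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        q_next = critic_tgt(s_next, actor_tgt(s_next))
        target = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(s, a), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks toward the online networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1.0 - TAU).add_(TAU * p.data)
```

In practice the mini-batches would be sampled from a replay buffer, and exploration noise would be added to the actor's output when interacting with the environment.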
“…Ensuring the adaptability of the policy to various challenges is crucial, as it cultivates the ability to handle generalized scenarios, thereby reducing the risk of overfitting to specific paths. We employed a stochastic path generation algorithm proposed in our previous work [ 35 ], randomly generating a reference path for the robot to follow at the beginning of each episode. In this paper, the parameters of the stochastic path generation algorithm are defined with , m, and m. Straight paths with a generation probability of are also introduced into the training.…”
Section: Design and Implementation
confidence: 99%
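As a rough illustration of episode-level path randomization, the sketch below draws a fresh reference path at the start of each training episode and occasionally substitutes a straight path. It is not the stochastic path generation algorithm of [ 35 ]; the segment count, segment length, turn bound, and straight-path probability are placeholder assumptions.

```python
# Illustrative episode-level random reference-path generation (not the algorithm of [35]).
# All parameters below are placeholder assumptions.
import numpy as np

def generate_reference_path(n_segments=10, seg_len=2.0,
                            max_turn=np.radians(30), p_straight=0.2,
                            rng=np.random.default_rng()):
    """Return an (N, 2) array of 2-D waypoints for one training episode."""
    if rng.random() < p_straight:
        # With some probability, emit a straight path to diversify training.
        xs = np.arange(n_segments + 1) * seg_len
        return np.stack([xs, np.zeros_like(xs)], axis=1)
    heading, pos = 0.0, np.zeros(2)
    pts = [pos.copy()]
    for _ in range(n_segments):
        heading += rng.uniform(-max_turn, max_turn)   # bounded random turn
        pos = pos + seg_len * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos.copy())
    return np.asarray(pts)

# Example: draw a fresh reference path at the start of each episode.
path = generate_reference_path()
```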