2022
DOI: 10.3390/app12146874

Path-Following and Obstacle Avoidance Control of Nonholonomic Wheeled Mobile Robot Based on Deep Reinforcement Learning

Abstract: In this paper, a novel path-following and obstacle avoidance control method is presented for nonholonomic wheeled mobile robots (NWMRs), based on deep reinforcement learning. The model for path-following is investigated first and then applied to the proposed reinforcement learning control strategy. The proposed control method achieves path-following control by interacting with the environment of the set path. The path-following control method is mainly based on the design of the state and reward function …
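The abstract attributes the path-following behavior mainly to the design of the state and reward function. As an illustrative sketch only — the weights, functional form, and signal names below are assumptions, not the paper's actual design — a reward built from cross-track error, heading error, and forward speed might look like:

```python
def path_following_reward(cross_track_err, heading_err, v,
                          w_d=1.0, w_theta=0.5, w_v=0.1):
    """Hypothetical reward shaping for path following: penalize distance
    to the path and heading deviation, mildly reward forward progress.
    All weights are illustrative assumptions, not values from the paper."""
    return -w_d * abs(cross_track_err) - w_theta * abs(heading_err) + w_v * v
```

A state being closer to the path and better aligned with it then scores strictly higher, which is the qualitative property such shaped rewards aim for.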

Cited by 8 publications (10 citation statements)
References 32 publications
“…In this case, the training data is generated while practicing, and the behavior of the vehicle is continually adjusted [ 4 ]. Thus, in [ 27 ], obstacle avoidance is learned by a UGV while performing 2D path tracking. In [ 28 ], an unmanned aerial vehicle is trained with Gazebo to fly among obstacles with a 2D LiDAR.…”
Section: Related Work
confidence: 99%
“…Finally, it is relevant to mention that most of the previously cited papers about RL adopt an Actor–Critic scheme [ 9 , 27 , 28 , 29 , 31 , 33 , 35 , 36 , 37 , 38 ].…”
Section: Related Work
confidence: 99%
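The Actor–Critic scheme mentioned in the statement above can be sketched minimally: an actor maps state to action, a critic scores states, and a shared temporal-difference error drives both updates. Everything below — the linear function approximators, the weights, and the two-component state — is an illustrative assumption, not the implementation of any cited paper (which use deep networks such as DDPG):

```python
import numpy as np

def actor(state, w):
    """Deterministic linear policy: steering command from path errors."""
    return float(w @ state)

def critic(state, v):
    """Linear state-value estimate."""
    return float(v @ state)

def td_error(r, s, s_next, v, gamma=0.99):
    """One-step temporal-difference error used to update both components."""
    return r + gamma * critic(s_next, v) - critic(s, v)

# One update step with made-up numbers:
s = np.array([1.0, 0.2])        # [cross-track error, heading error]
w = np.array([-0.5, -0.3])      # actor weights (hypothetical)
v = np.array([-1.0, -0.4])      # critic weights (hypothetical)
a = actor(s, w)                 # steering correction toward the path
s_next = np.array([0.8, 0.1])   # errors shrink after the action
r = -abs(s_next[0])             # closer to the path = higher reward
delta = td_error(r, s, s_next, v)
v += 0.1 * delta * s            # critic: TD(0) update
w += 0.01 * delta * a * s       # actor: crude policy-gradient-style step
```

The key structural point is that the critic's TD error, not the raw reward, is the learning signal for the actor — this is what distinguishes Actor–Critic methods from pure policy-gradient or pure value-based approaches.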
“…Rios et al [17] used a recursive higher-order neural network algorithm based on extended Kalman filtering and applied inverse optimal control to a tracked chassis system. Cheng et al [18] used a reinforcement learning algorithm that interacts with the environment of a set path to achieve path following in a nonholonomic wheeled mobile robot (NWMR). Saha et al [19] controlled the robot to accomplish path following based on a deep neural network (DNN).…”
Section: Introduction
confidence: 99%
“…It is difficult to fine-tune an agent trained in a simulation environment through experimentation; thus, a correction constant is employed to stabilize the agent's output in their experiment. Cheng et al [16] accomplished path following and collision avoidance for a nonholonomic wheeled mobile robot in simulation, but the trained agent exerted excessive control effort, resulting in high jerks in the robot's velocities. More recently, Zheng et al [17] proposed a 3D path-following control method for powered parafoils, combining linear active disturbance rejection control and DDPG to control the parafoils' flight trajectory and counter wind disturbances.…”
Section: Introduction
confidence: 99%