2023
DOI: 10.3390/act12080326
A Self-Adaptive Double Q-Backstepping Trajectory Tracking Control Approach Based on Reinforcement Learning for Mobile Robots

Abstract: When a mobile robot performs indoor inspection tasks with complex requirements, the traditional backstepping method cannot guarantee trajectory accuracy, leading to problems such as the instrument falling outside the image frame and focus failure when the robot captures images at high zoom. To solve this problem, this paper proposes an adaptive backstepping method based on double Q-learning for trajectory tracking control of mobile robots. We design the incremental model-free algorithm of Do…
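The abstract's adaptive backstepping controller builds on double Q-learning. The paper's exact formulation is not reproduced here, but the core idea of double Q-learning, maintaining two value tables and using one to evaluate the other's greedy action to reduce maximization bias, can be sketched as follows (the function name, table shapes, and hyperparameters below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def double_q_update(Q_A, Q_B, s, a, r, s_next,
                    alpha=0.1, gamma=0.99, rng=None):
    """One tabular double Q-learning step.

    With probability 1/2, update Q_A: pick the greedy action under Q_A
    but evaluate it with Q_B (and vice versa). Decoupling selection from
    evaluation reduces the overestimation bias of plain Q-learning.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_A[s_next]))       # action selected by Q_A
        target = r + gamma * Q_B[s_next, a_star]   # value estimated by Q_B
        Q_A[s, a] += alpha * (target - Q_A[s, a])
    else:
        b_star = int(np.argmax(Q_B[s_next]))       # action selected by Q_B
        target = r + gamma * Q_A[s_next, b_star]   # value estimated by Q_A
        Q_B[s, a] += alpha * (target - Q_B[s, a])
```

In the paper's setting the learned values would feed back into the backstepping gains; here the update is shown in its generic tabular form only.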

Cited by 3 publications (2 citation statements)
References 38 publications
“…The controller output is determined by summing these three terms, with each multiplied by its respective tuning constant. The PID controller equation can be expressed mathematically as follows [42]:…”
Section: Pid Control Techniquementioning
confidence: 99%
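The PID equation cited as [42] above is not reproduced in the excerpt, but the description, three terms each scaled by its own tuning constant and summed, corresponds to the standard discrete-time PID law. A minimal sketch (class name and parameter values are illustrative assumptions):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.

    Each of the proportional, integral, and derivative terms is
    multiplied by its respective tuning constant and the results summed.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None  # no derivative on the first sample

    def update(self, error):
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

For example, with `PID(2.0, 0.5, 0.1, 0.1)` and a first error of 1.0, the output is 2.0 (proportional) + 0.05 (integral) + 0 (no derivative yet) = 2.05.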
“…At present, flight control methods for drones mainly include linear control methods [8][9][10], nonlinear control methods [11][12][13], and intelligent control methods [14][15][16][17]. Learning-based robot control has received widespread attention in the field of automatic control [18][19][20], as it does not require a dynamic model of the robot and instead learns control policies from large amounts of motion data. In a recent Nature paper, Elia Kaufmann et al. [21], from the Robotics and Perception Group at the University of Zurich, presented Swift, an autonomous unmanned aerial vehicle system that used deep reinforcement learning (DRL) to defeat human champions in drone racing, setting new competition records.…”
Section: Introductionmentioning
confidence: 99%