2020 39th Chinese Control Conference (CCC)
DOI: 10.23919/ccc50068.2020.9188426
A PID Gain Adjustment Scheme Based on Reinforcement Learning Algorithm for a Quadrotor

Cited by 5 publications (7 citation statements)
References 10 publications
“…The state space vector X describes the position of the quadcopter in space and its linear and angular velocities as follows [5][6][7][8][17][18][19][20]:…”
Section: Quadcopter State Space Model
confidence: 99%
“…The quadcopter's orientation is represented by the three Euler angles: φ represents the roll angle around the x-axis, θ represents the pitch angle around the y-axis, and ψ represents the yaw angle around the z-axis [5][6][7]. In order to obtain the state space representation of the quadcopter, the following equations that describe the translational and rotational motion of the quadcopter are used [17][18][19][20]:…”
Section: Quadcopter State Space Model
confidence: 99%
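The excerpts above describe the standard 12-dimensional quadcopter state vector: position, linear velocity, Euler angles (roll φ, pitch θ, yaw ψ), and body angular rates. As a minimal sketch of that layout (the ordering and helper names here are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

# Hypothetical 12-state quadcopter state vector X, following the common
# convention the excerpts describe:
#   position (x, y, z), linear velocity (vx, vy, vz),
#   Euler angles (phi = roll, theta = pitch, psi = yaw),
#   body angular rates (p, q, r).
STATE_NAMES = [
    "x", "y", "z",          # position in the inertial frame
    "vx", "vy", "vz",       # linear velocities
    "phi", "theta", "psi",  # roll, pitch, yaw
    "p", "q", "r",          # body-frame angular rates
]

def make_state(**kwargs):
    """Build a 12-element state vector; unspecified entries default to 0."""
    X = np.zeros(len(STATE_NAMES))
    for name, value in kwargs.items():
        X[STATE_NAMES.index(name)] = value
    return X

# Example: hovering at 1.5 m altitude with a small yaw offset.
X = make_state(z=1.5, psi=0.1)
```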
“…Combining it with the traditional PID algorithm can improve the generalization and adaptability of a PID controller. Zheng proposed a PID controller based on reinforcement learning to enhance the trajectory tracking performance of a quadrotor [29]. By combining the proximal policy optimization (PPO) algorithm with the traditional PID controller, this method improved response time, reduced overshoot, minimized control errors, enhanced stability, and strengthened anti-interference capability compared with the traditional PID controller.…”
Section: Introduction
confidence: 99%
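The gain-adjustment idea these citing papers describe — an RL policy proposing the gains (Kp, Ki, Kd) that a conventional PID loop then applies — can be sketched as follows. This is a schematic under assumed names and values, not the paper's implementation; `dummy_policy` stands in for a trained PPO actor:

```python
# Sketch of RL-based PID gain adjustment: an external policy outputs
# the gains, and a conventional discrete PID loop applies them.
# All names, gains, and the time step are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                # control-loop period [s]
        self.integral = 0.0
        self.prev_error = 0.0

    def set_gains(self, kp, ki, kd):
        # Called with gains proposed by the RL policy (e.g. per episode).
        self.kp, self.ki, self.kd = kp, ki, kd

    def update(self, error):
        # Standard discrete PID law: u = Kp*e + Ki*∫e dt + Kd*de/dt.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def dummy_policy(observation):
    # Stand-in for a PPO actor network mapping tracking state to gains.
    return 2.0, 0.5, 0.1

pid = PID(*dummy_policy(None))
u = pid.update(error=1.0)  # control command for a unit tracking error
```

In the cited scheme the policy is trained on a tracking-error-based reward, so the gains adapt to the flight condition instead of being fixed at design time.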
“…Through training and testing in the RotorS-Gazebo environment, it was shown that the tracking performance of the method was better than that of the NLGL (Nonlinear Navigation Logic) method. Zheng Q et al [27] used the PPO reinforcement learning algorithm to adjust the PID controller gains, achieving good stability in control, anti-interference, and altitude holding. In addition, Zhen Y et al [28,29] proposed a hybrid DDPG (Mi-DDPG) algorithm.…”
Section: Introduction
confidence: 99%