2021
DOI: 10.1109/lcsys.2020.3001241

H∞ Tracking Control for Linear Discrete-Time Systems: Model-Free Q-Learning Designs

Cited by 49 publications (42 citation statements: 1 supporting, 41 mentioning, 0 contrasting)
References 16 publications
“…The Stackelberg game-based optimal control and the worst-case disturbance are derived by solving the corresponding GARE. The results show that the controller designed in this article can achieve a lower L₂ disturbance attenuation level compared with the ones in References 35-39 because there is a feedforward part in the controller. Moreover, it is rigorously proved that the disturbance attenuation condition holds under the proposed controller. The proposed Q-learning algorithm is model-free and uses only input–output data.…”
Section: Introduction (mentioning)
confidence: 86%
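For context, a minimal sketch of the game algebraic Riccati equation (GARE) that this statement refers to, written for the standard discrete-time zero-sum game with dynamics x_{k+1} = A x_k + B u_k + E w_k, stage weights Q and R, and attenuation level γ; the cited article's Stackelberg formulation additionally carries a feedforward term that is not shown here:

```latex
% Standard GARE for the discrete-time zero-sum game (a sketch;
% Q, R are stage-cost weights, \gamma is the attenuation level).
P = Q + A^{\top} P A
  - \begin{bmatrix} A^{\top} P B & A^{\top} P E \end{bmatrix}
    \begin{bmatrix} R + B^{\top} P B & B^{\top} P E \\
                    E^{\top} P B     & E^{\top} P E - \gamma^{2} I \end{bmatrix}^{-1}
    \begin{bmatrix} B^{\top} P A \\ E^{\top} P A \end{bmatrix}
```

The saddle-point gains for the control u_k = -K x_k and the worst-case disturbance w_k = -K_w x_k are read off the same block inverse; solving this equation requires the model (A, B, E), which is precisely what the quoted model-free Q-learning design avoids.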
“…Therefore, this article mainly focuses on using reinforcement learning to solve the H∞ tracking problem. There have been many studies developed for H∞ control 30-34 and H∞ tracking problems 35-39. For the H∞ control by using reinforcement learning, a fundamental work is in Reference 33, where a model-free Q-learning is designed for discrete-time zero-sum games.…”
Section: Introduction (mentioning)
confidence: 99%
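To make the quoted "model-free Q-learning ... for discrete-time zero-sum games" concrete, here is a minimal sketch in the spirit of that approach: the quadratic Q-function kernel H is estimated from sampled transitions by least squares, and the saddle-point gains are read off its partition. All system matrices, noise levels, and iteration counts below are illustrative assumptions (used only to simulate data), not the cited reference's implementation.

```python
import numpy as np

# Minimal sketch of model-free Q-learning for the discrete-time zero-sum
# game: value iteration on Q(x,u,w) = [x;u;w]^T H [x;u;w]. The learner
# never uses (A, B, E) directly; they only generate the sampled data.

# Hypothetical system, standing in for the unknown plant.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[1.0], [0.0]])
Qx, Ru, gamma = np.eye(2), np.eye(1), 5.0   # stage weights, attenuation level
n, m, q = 2, 1, 1
z = n + m + q                               # dimension of [x; u; w]

def quad_basis(v):
    """Quadratic basis: products v_i * v_j for i <= j (H is symmetric)."""
    return np.concatenate([v[i] * v[i:] for i in range(len(v))])

H = np.eye(z)                               # current Q-function kernel estimate
rng = np.random.default_rng(0)

for sweep in range(50):                     # value-iteration sweeps
    # Saddle-point gains from the partition of the current H.
    Huu, Huw, Hux = H[n:n+m, n:n+m], H[n:n+m, n+m:], H[n:n+m, :n]
    Hww, Hwx = H[n+m:, n+m:], H[n+m:, :n]
    M = np.block([[Huu, Huw], [Huw.T, Hww]])
    G = np.linalg.solve(M, np.vstack([Hux, Hwx]))
    K, Kw = G[:m], G[m:]                    # u = -K x,  w = -Kw x

    Phi, y = [], []
    x = rng.standard_normal(n)
    for k in range(200):                    # collect transitions with exploration noise
        u = -K @ x + 0.1 * rng.standard_normal(m)
        w = -Kw @ x + 0.1 * rng.standard_normal(q)
        xn = A @ x + B @ u + E @ w
        r = x @ Qx @ x + u @ Ru @ u - gamma**2 * (w @ w)
        # Bellman target: stage cost plus current H evaluated at the
        # saddle-point actions for the next state.
        zn = np.concatenate([xn, -K @ xn, -Kw @ xn])
        y.append(r + zn @ H @ zn)
        Phi.append(quad_basis(np.concatenate([x, u, w])))
        x = xn if np.linalg.norm(xn) < 1e3 else rng.standard_normal(n)

    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)

    # Rebuild symmetric H from the upper-triangular parameter vector
    # (off-diagonal entries were counted twice in the basis).
    Hn, idx = np.zeros((z, z)), 0
    for i in range(z):
        for j in range(i, z):
            Hn[i, j] = Hn[j, i] = theta[idx] / (1.0 if i == j else 2.0)
            idx += 1
    H = Hn

print("learned state-feedback gain K =", K)
```

The design point the quote emphasizes carries over here: the plant matrices appear only in the data-generation step, which in practice would be the physical system, so the gains are obtained from measured data alone.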