2022
DOI: 10.1016/j.automatica.2021.110103

Adaptive optimal output regulation of linear discrete-time systems based on event-triggered output-feedback

Cited by 22 publications (15 citation statements). References 41 publications.
“…To the best of our knowledge, there have been extensive studies on the event-triggered mechanism. [27][28][29][30][31][32][33][34][35][36][37][38] In Reference 27, Zhong et al introduced a classical triggering condition into the ADP algorithm to address the optimal regulation problem for unknown nonlinear continuous-time plants. In Reference 28, an event-based self-learning controller was investigated for unknown nonlinear plants, and a detailed stability proof was given.…”
Section: Introduction
confidence: 99%
“…As an advanced sampling approach, the essence of the event-triggered mechanism is to decide when the controller updates by choosing an appropriate triggering condition, thereby saving energy. To the best of our knowledge, there have been extensive studies on the event-triggered mechanism.27-38 In Reference 27, Zhong et al introduced a classical triggering condition into the ADP algorithm to address the optimal regulation problem for unknown nonlinear continuous-time plants.…”
Section: Introduction
confidence: 99%
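The triggering logic described in this excerpt — updating the controller only when a condition on the measurement gap fires — can be sketched for a linear discrete-time plant as follows. The plant matrices, feedback gain, and threshold below are illustrative assumptions, not taken from the cited paper or its references.

```python
import numpy as np

# Hypothetical discrete-time plant x_{k+1} = A x_k + B u_k (values assumed)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 2.0]])   # assumed stabilizing state-feedback gain
sigma = 0.3                  # triggering threshold (design parameter)

x = np.array([[1.0], [-1.0]])
x_hat = x.copy()             # last transmitted state, held by the controller
updates = 0

for k in range(50):
    e = x - x_hat            # gap between current and last transmitted state
    # Event-triggering condition: transmit a new state sample only when the
    # gap exceeds a fraction of the current state norm.
    if np.linalg.norm(e) > sigma * np.linalg.norm(x):
        x_hat = x.copy()
        updates += 1
    u = -K @ x_hat           # control computed from the held (event-sampled) state
    x = A @ x + B @ u

print(f"controller updates: {updates} of 50 steps")
```

Between events the controller reuses the held sample, so the number of transmissions is strictly smaller than the number of time steps, which is exactly the energy-saving effect the excerpt describes.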
“…12,13 As stated in the survey,14 ADP algorithms can be divided into two categories in terms of iteration: value iteration15,16 and policy iteration.17,18 So far, a large number of ADP-based results have been obtained for various control problems, such as optimal control with constrained control inputs,19-21 optimal tracking control,22-24 networked control,25 robust control,26,27 and event-triggered control,28-30 which strongly shows the applicability and great potential of ADP algorithms. In Reference 16, the convergence of the adaptive critic algorithm was proven and the algorithm procedure was given.…”
Section: Introduction
confidence: 99%
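For the linear-quadratic case, the value-iteration category mentioned in this excerpt reduces to iterating the Riccati recursion until it converges to the solution of the discrete algebraic Riccati equation (a policy-iteration variant would instead solve a Lyapunov equation under the current gain at each step). The sketch below uses assumed system matrices for illustration only.

```python
import numpy as np

# Illustrative discrete-time LQR problem (values assumed, not from the cited papers)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Value iteration on the cost matrix:
#   P_{j+1} = Q + A'P_j A - A'P_j B (R + B'P_j B)^{-1} B'P_j A,  P_0 = 0.
# For a stabilizable (A, B), P_j converges to the DARE solution.
P = np.zeros((2, 2))
for _ in range(500):
    G = np.linalg.inv(R + B.T @ P @ B)
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

# Optimal feedback gain u_k = -K x_k recovered from the converged P
K = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
rho = max(abs(np.linalg.eig(A - B @ K)[0]))
print(f"closed-loop spectral radius: {rho:.4f}")
```

The resulting closed-loop matrix A - BK is Schur stable (spectral radius below one), which is the property the ADP iterations in the cited works approximate from data rather than from known model matrices.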
“…Due to its universal approximation and self-learning capabilities, RL has become a popular tool for dealing with optimal control problems.20-22 Up to now, many notable results have been published on the OORP.23-31 For example, the linear OORP was addressed in References 23 and 24, and its discrete-time version was discussed in Reference 25.…”
Section: Introduction
confidence: 99%
“…Up to now, many notable results have been published on the OORP.23-31 For example, the linear OORP was addressed in References 23 and 24, and its discrete-time version was discussed in Reference 25. In Reference 26, the RL-based nonlinear OORP was discussed, where the feedforward design approach and RL techniques were integrated for the first time.…”
Section: Introduction
confidence: 99%