2022
DOI: 10.1109/tnnls.2021.3051030
Observer-Based Neuro-Adaptive Optimized Control of Strict-Feedback Nonlinear Systems With State Constraints

Cited by 495 publications (225 citation statements)
References 50 publications
“…Finally, the validity and applicability of the presented adaptive resilient control scheme are verified by simulation results. Inspired by the results in References 51 and 52, adaptive optimal secure control against cyberattacks for nonlinear systems will be one of our future research directions.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, the stability proof it provides compromises the attractive model-free feature of RL, since a mathematical model of the dynamics is required for a rigorous system stability analysis. Although explicit knowledge of the dynamics can be avoided by using add-on techniques such as NNs [10-12], fuzzy models [13], Gaussian processes (GP) [14], or observers [15], the accompanying identification processes further increase computational complexity and parameter-tuning effort. This motivates us to develop a novel, computationally simple RL-based control strategy that exhibits both a model-free feature and a provable system stability guarantee, to accomplish robust optimal stabilization of continuous-time nonlinear systems.…”
Section: Introduction (mentioning)
confidence: 99%
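The "add-on" identification step this excerpt refers to can be made concrete with a small sketch. The following is a minimal illustration, not taken from the cited papers: a one-hidden-layer neural network trained by gradient descent to approximate unknown one-step dynamics x_{k+1} = f(x_k, u_k) from transition samples. All dimensions, the toy system, and the hyperparameters are assumptions for illustration.

```python
# Minimal sketch (assumed setup, not the cited papers' method): a
# one-hidden-layer NN identifier for unknown dynamics x_next = f(x, u).
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_h = 2, 1, 16          # state, input, hidden sizes (assumed)
W1 = 0.1 * rng.standard_normal((n_h, n_x + n_u))
b1 = np.zeros(n_h)
W2 = 0.1 * rng.standard_normal((n_x, n_h))
b2 = np.zeros(n_x)

def predict(x, u):
    """NN approximation of the one-step dynamics."""
    z = np.concatenate([x, u])
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2, h, z

def train_step(x, u, x_next, lr=1e-2):
    """One gradient-descent step on the squared prediction error."""
    global W1, b1, W2, b2
    x_hat, h, z = predict(x, u)
    e = x_hat - x_next                 # prediction error
    dW2 = np.outer(e, h)
    db2 = e
    dh = (W2.T @ e) * (1.0 - h**2)     # backprop through tanh
    dW1 = np.outer(dh, z)
    db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return 0.5 * float(e @ e)

# Example: identify a toy nonlinear system from random transitions.
def true_dynamics(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

for k in range(5000):
    x = rng.uniform(-1, 1, n_x)
    u = rng.uniform(-1, 1, n_u)
    loss = train_step(x, u, true_dynamics(x, u))
print("final sample loss:", loss)
```

The per-step training loop also makes the excerpt's complexity point visible: the identifier adds its own weights, learning rate, and tuning burden on top of the controller itself.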
“…Note that a well-known structure motivated by reinforcement learning, known as the actor-critic structure, has been developed to enable the implementation of ADP approaches, in which the critic network and the action network are employed to approximate the cost function and the ideal control sequence, respectively [9-11]. It is worth mentioning that the resulting control is actually near-optimal due to the approximation errors of the ADP structure.…”
Section: Introduction (mentioning)
confidence: 99%
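To illustrate the actor-critic structure this excerpt describes, here is a minimal sketch under assumed conditions, not the cited papers' algorithm: ADP on a scalar discrete-time linear system x_{k+1} = a x_k + b u_k with stage cost q x^2 + r u^2, where a quadratic critic V(x) = w x^2 approximates the cost-to-go and a linear actor u(x) = k x approximates the optimal control. The system parameters, learning rates, and function classes are all assumptions chosen for transparency.

```python
# Minimal actor-critic ADP sketch (assumed toy problem, not the cited
# papers' method). Critic V(x) = w*x^2 approximates the cost function;
# actor u(x) = k*x approximates the ideal control.
import numpy as np

a, b, q, r, gamma = 0.9, 0.5, 1.0, 0.1, 0.95   # assumed system and cost
w, k = 1.0, 0.0                                 # critic / actor weights
lr_c, lr_a = 0.05, 0.02
rng = np.random.default_rng(1)

for it in range(3000):
    x = rng.uniform(-2, 2)
    u = k * x
    x_next = a * x + b * u
    cost = q * x**2 + r * u**2
    # Critic update: semi-gradient step on the temporal-difference
    # (Bellman) error of V(x) = w*x^2.
    td = cost + gamma * w * x_next**2 - w * x**2
    w += lr_c * td * x**2
    # Actor update: descend the critic's estimate of future cost w.r.t. u:
    # d/du [cost + gamma*V(x_next)] = 2*r*u + gamma*2*w*x_next*b,
    # then chain rule du/dk = x to update the gain k.
    grad_u = 2 * r * u + gamma * 2 * w * x_next * b
    k -= lr_a * grad_u * x

print(f"learned critic weight w = {w:.3f}, actor gain k = {k:.3f}")
```

Because both the critic and the actor are function approximators, the learned gain converges only near the optimum, which matches the excerpt's remark that the resulting control is near-optimal rather than exactly optimal.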