Dynamic Learning From Adaptive Neural Control for Discrete-Time Strict-Feedback Systems
2022 · DOI: 10.1109/tnnls.2021.3054378
Cited by 31 publications (32 citation statements)
References 54 publications
“…Simulation studies are implemented to verify the validity of the proposed control scheme. In the future, the dynamic learning 5,52,60 and the optimization problem 16-18,29,58,65 of discrete-time nonlinear systems, and the event-triggered consensus of discrete-time multi-agent systems 44 will be our major concern.…”
Section: Discussion (mentioning)
confidence: 99%
“…58,59 Subsequently, a dynamic learning scheme has been proposed to achieve knowledge acquisition, storage, and reuse of unknown dynamics for discrete-time NSMMU with the aid of an extended exponential stability result for linear time-varying systems. 60 The results mentioned above depend on sufficient network bandwidth. To remove this restriction and satisfy practical requirements, an event-triggering threshold compensation strategy has been proposed for discrete-time NSMMU embedded with a controller-to-actuator network.…”
Section: Introduction (mentioning)
confidence: 97%
“…17 This theory has successfully proven that radial basis function neural networks (RBF NNs) are capable of satisfying the PE condition along recurrent trajectories and that the neural weights can converge to their ideal values, thereby guaranteeing the learning ability of NNs. More recently, the deterministic learning theory has not only been extended to some more general nonlinear systems, including strict-feedback and pure-feedback systems, 18,19 but also employed in some practical systems, including spacecraft and marine surface vessels. 20 Note that most existing adaptive and learning control strategies do not investigate the transient and steady-state tracking performance of the considered system, which is limiting in some applications, since many tracking performance indicators, including convergence rate, overshoot, and steady-state error, are required to meet specified constraints in engineering systems.…”
Section: Introduction (mentioning)
confidence: 99%
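For readers unfamiliar with the deterministic learning mechanism referenced in the excerpt above, the following sketch illustrates its basic ingredients: a Gaussian RBF network whose weights are adapted along a recurrent (periodic) trajectory, the setting in which the localized regressor becomes persistently exciting and the relevant weights converge. The target function, gains, and center grid are illustrative assumptions, not the adaptation law of the cited papers.

```python
import numpy as np

# Minimal sketch (not the cited papers' exact adaptation law): a Gaussian RBF
# network whose weights are adapted along a recurrent (periodic) trajectory,
# the setting in which deterministic learning guarantees partial persistent
# excitation and convergence of the weights associated with the visited region.

def rbf_features(z, centers, width):
    """Gaussian radial basis functions S(z) evaluated at state z."""
    d2 = np.sum((centers - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Unknown dynamics to be learned (illustrative choice).
f_true = lambda z: np.sin(z[0]) * np.cos(z[1])

# RBF grid covering the region visited by the recurrent trajectory.
grid = np.linspace(-1.5, 1.5, 9)
centers = np.array([[c1, c2] for c1 in grid for c2 in grid])
W_hat = np.zeros(len(centers))             # estimated weights
gamma, sigma, width, dt = 2.0, 1e-4, 0.4, 0.01

for k in range(100_000):                   # follow a periodic (recurrent) orbit
    t = k * dt
    z = np.array([np.cos(t), np.sin(t)])   # recurrent trajectory
    S = rbf_features(z, centers, width)
    err = W_hat @ S - f_true(z)            # approximation error along the orbit
    # Gradient-type adaptation with a small sigma-modification (illustrative gains).
    W_hat -= dt * (gamma * err * S + sigma * W_hat)

# After adaptation, W_hat @ S(z) approximates f_true(z) along the orbit.
z_test = np.array([np.cos(0.3), np.sin(0.3)])
print(f_true(z_test), W_hat @ rbf_features(z_test, centers, width))
```

The key point is locality: only the RBF centers near the recurrent orbit receive persistent excitation, so only their weights are guaranteed to converge, while weights of centers far from the orbit remain near zero.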
“…Using the deterministic learning theory, the neural learning control (NLC) method has been extended to some continuous-time nonlinear systems such as SFSs, 24-26 pure-feedback systems, 27 and has also been expanded to many practical systems. 28-30 For discrete-time SFSs, to avoid the non-causality problem, an NLC scheme is constructed by using the n-step predictor in work, 31 and the convergence of neural weights is verified by an extended exponential stability corollary of a class of linear time-varying systems with delays. By combining the n-step input-output predictor and the implicit function theorem, the NLC problem has also been discussed in work 32 for discrete-time pure-feedback systems.…”
Section: Introduction (mentioning)
confidence: 99%
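As a side note on why the n-step predictor removes the non-causality problem mentioned above, the toy example below iterates a hypothetical second-order discrete-time strict-feedback system (f1 and f2 are illustrative, not taken from the cited works) so that the current input u(k) appears explicitly in the 2-step-ahead output, which is what allows a causal control or learning law to be designed.

```python
import numpy as np

# Hypothetical 2nd-order discrete-time strict-feedback system (f1, f2 are
# illustrative, not taken from the cited works):
#   x1(k+1) = f1(x1(k)) + x2(k)
#   x2(k+1) = f2(x1(k), x2(k)) + u(k)
f1 = lambda x1: 0.2 * np.sin(x1)
f2 = lambda x1, x2: 0.1 * x1 * x2

def step(x1, x2, u):
    """One step of the strict-feedback dynamics."""
    return f1(x1) + x2, f2(x1, x2) + u

def predict_x1_two_steps(x1, x2, u):
    # Iterating the dynamics by hand:
    #   x1(k+2) = f1(x1(k+1)) + x2(k+1)
    #           = f1(f1(x1(k)) + x2(k)) + f2(x1(k), x2(k)) + u(k)
    # The current input u(k) now appears explicitly in the prediction.
    return f1(f1(x1) + x2) + f2(x1, x2) + u

x1, x2, u0, u1 = 0.5, -0.3, 0.7, 0.0
a1, a2 = step(x1, x2, u0)        # state at k+1
b1, _ = step(a1, a2, u1)         # x1 at k+2 (independent of u(k+1))
print(b1, predict_x1_two_steps(x1, x2, u0))   # the two values coincide
```

For an n-th order strict-feedback system, the same composition expresses x1(k+n) as a function of the full state at time k and u(k), which is the predictor form that makes causal controller design possible in discrete time.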
“…By combining the n-step input-output predictor and the implicit function theorem, the NLC problem has also been discussed in work 32 for discrete-time pure-feedback systems. However, the result in works 31,32 shows that the neural weights based on the n-step delay neural update law will converge to n different values, which brings many challenges to the construction of the neural learning controller. For example, the neural learning controller needs to switch constant neural weights according to the time sequence, which will cause system state chattering and may crash the control system.…”
Section: Introduction (mentioning)
confidence: 99%
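Purely as an illustration of the last point (a toy recursion, not the cited update law; the gain and the phase-dependent targets are invented), the sketch below shows how an update that only relates the weight at step k+n to the weight at step k decomposes into n interleaved subsequences, each of which can settle at a different limit. A controller reusing such learned weights would then have to switch among n stored vectors, which is the chattering issue raised in the excerpt.

```python
import numpy as np

# Toy illustration (not the cited n-step update law; the gain and the
# phase-dependent "targets" are invented): a scalar weight that is only
# corrected against information observed n steps earlier splits into n
# independent subsequences {w(j*n + r)}_j, one per phase r = 0, ..., n-1.
n, gamma, steps = 3, 0.1, 3000
targets = [1.0, 2.0, 3.0]                  # what each phase "sees" (invented)
w = np.zeros(steps + n)

for k in range(steps):
    d = targets[k % n]                     # data available at phase k mod n
    w[k + n] = w[k] + gamma * (d - w[k])   # n-step delayed correction

for r in range(n):
    # steps is a multiple of n, so w[steps + r] belongs to phase r.
    print(f"phase {r}: weight settles near {w[steps + r]:.3f} (target {targets[r]})")
```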