2019
DOI: 10.1109/tnnls.2018.2821668
Neuro-Adaptive Control With Given Performance Specifications for Strict Feedback Systems Under Full-State Constraints

Abstract: In this paper, we investigate the tracking control problem for a class of strict feedback systems with pregiven performance specifications as well as full-state constraints. Our focus is on developing a feasible neural network (NN)-based control method that is able to, under full-state constraints, force the tracking error to converge into a prescribed region within preset finite time and further reduce the error to a smaller and adjustable residual set, while confining the overshoot within predefined small le…

Cited by 67 publications (8 citation statements)
References 34 publications
“…[49] The behavior-shaping function shown in (8) possesses the following properties: ; and for all ; , is a positive integer. …”
Section: Preliminaries and Problem Formulation
Confidence: 99%
“…Inspired by the neural adaptive PPC proposed in ref. [49], an asymmetric scaling function and a behavior-shaping function are introduced to constrain the overshoot and the steady-state errors, respectively. The unknown model dynamics of the flexible robotic manipulator are approximated by the improved RBFNNs.…”
Section: Introduction
Confidence: 99%
“…In this paper, we will use radial basis function NNs 39–42 to approximate some (lumped) unknown nonlinear functions, provided the NN structure is sufficiently complex and the number of neurons is large enough. According to the universal approximation theorem, for any given continuous function $F_i(X_i): \mathbb{R}^n \to \mathbb{R}$ on a compact set $D_i \subset \mathbb{R}^n$, there exists a NN such that $F_i(X_i)$ can be approximated with sufficient accuracy by choosing an ideal NN as follows: $F_i(X_i) = W_i^{*T} S_i(X_i) + \xi_i(X_i)$, where $W_i^*$ denotes the ideal constant neural weight vector, $X_i \in \mathbb{R}^n$ is the NN input vector, and $\xi_i(X_i)$ is the approximation error.…”
Section: System Description and Preliminaries
Confidence: 99%
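The linear-in-parameters structure $F_i(X_i) = W_i^{*T} S_i(X_i) + \xi_i(X_i)$ quoted above can be illustrated numerically. The following is a minimal sketch, not code from the cited paper: the Gaussian basis, the target function, and the choice of centers and width are all illustrative assumptions, and the ideal weight vector is estimated offline by least squares rather than by the adaptive laws used in the control literature.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis vector S(x) evaluated at scalar inputs x.

    x: shape (N,), centers: shape (m,) -> feature matrix of shape (N, m).
    """
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_weights(x, y, centers, width):
    """Least-squares estimate of the weight vector W in F(x) ~ W^T S(x)."""
    S = rbf_features(x, centers, width)
    W, *_ = np.linalg.lstsq(S, y, rcond=None)
    return W

# Illustrative "unknown" nonlinearity on the compact set [-3, 3]
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) + 0.5 * x

centers = np.linspace(-3.0, 3.0, 15)  # basis centers covering the compact set
width = 0.5                            # common Gaussian width (assumed)

W = fit_weights(x, y, centers, width)
y_hat = rbf_features(x, centers, width) @ W

# Residual plays the role of the approximation error xi(x) on the set
max_err = np.max(np.abs(y - y_hat))
```

Increasing the number of centers shrinks `max_err`, which mirrors the quoted remark that the approximation holds "as long as the NN structure is sufficiently complex and the number of neurons is large enough."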
“…A large variety of previous works consider neural-network-based adaptive control (neuro-adaptive control) with stability guarantees, focusing on the optimal control problem [2,11,12,10,13,14,15,16,17,18,19,5]. Nevertheless, the related works draw motivation from the neural network density property (see, e.g., [20]) and assume sufficiently small approximation errors and linear parameterizations of the unknown terms (dynamics, optimal controllers, or value functions), which is also the case with standard adaptive control methodologies [1,21,22,8].…”
Section: Related Work
Confidence: 99%