Proceedings of 1995 American Control Conference - ACC'95
DOI: 10.1109/acc.1995.529368

Stable adaptive neural control of nonlinear systems

Abstract: Based on the Lyapunov synthesis approach, several adaptive neural control schemes have been developed during the last few years. So far, these schemes have been applied only to simple classes of nonlinear systems. This paper develops a design methodology that expands the class of nonlinear systems to which adaptive neural control schemes can be applied and also relaxes some of the restrictive assumptions that are usually made. One such assumption is the requirement of a known bound on the network reconstruction…

Cited by 16 publications (8 citation statements). References 9 publications.
“…Moreover, the introduction of the term g_1(x_1)z_1 does not add any new variable to the function h_2(Z_2), which keeps the existing NN approximation results valid in this paper. Therefore, employing the RBF NN to approximate h_2(Z_2), we have h_2(Z_2) = W_2^{*T} S_2(Z_2) + ε_2, where ε_2 is the approximation error satisfying |ε_2| ≤ ε_2^*, and W_2^* is an unknown ideal weight vector. Let Ŵ_2 be the estimate of W_2^*, and W̃_2 = Ŵ_2 − W_2^* be the estimation error.…”
Section: Iss-modular Direct Ancmentioning
confidence: 99%
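The RBF NN approximation pattern quoted above can be illustrated generically: an unknown scalar function h(Z) is written as W*ᵀS(Z) + ε over a compact set, where S(Z) is a vector of Gaussian basis functions, W* is the (ideal) weight vector minimizing the residual, and ε is the bounded approximation error. The sketch below is a minimal illustration under assumed values (the target function, center placement, and widths are illustrative choices, not taken from the cited papers); it computes a least-squares W* and the resulting error bound ε* on a grid.

```python
import numpy as np

def rbf_basis(Z, centers, width):
    """Gaussian RBF regressor vector S(Z) for a batch of inputs.
    Z: (n, d) inputs, centers: (m, d) -> S: (n, m)."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Illustrative unknown smooth function h(Z) on the compact set [-1, 1]
h = lambda z: np.sin(3 * z) + 0.5 * z ** 2

# Evenly spaced centers covering the compact set (a common choice)
centers = np.linspace(-1, 1, 25).reshape(-1, 1)
Z = np.linspace(-1, 1, 200).reshape(-1, 1)

S = rbf_basis(Z, centers, width=0.15)
# Least-squares weights play the role of the "ideal" W*
W_star, *_ = np.linalg.lstsq(S, h(Z[:, 0]), rcond=None)

eps = h(Z[:, 0]) - S @ W_star   # approximation error epsilon
eps_bound = np.abs(eps).max()   # epsilon* over this grid
print(f"max |eps| on grid: {eps_bound:.2e}")
```

In the adaptive-control setting, W* is unknown; a controller instead uses the estimate Ŵ, updated online, and the analysis bounds the effect of W̃ = Ŵ − W* and ε.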
“…Theorem 4: Consider the closed-loop system consisting of the strict-feedback system (1), the reference model (2), and the neural learning controller (34) with the virtual control laws (35) and the learned neural weights given by (23). For initial conditions x_di(0) of the reference model that generate the same recurrent orbit ϕ_di(Z_di(0)) as in Theorem 3, and with the corresponding initial conditions x_i(0) of (1) in a close vicinity of ϕ_di(Z_di(0)), all signals in the closed-loop system remain bounded, and the tracking error converges exponentially to a small neighborhood of zero.…”
Section: Learning Control Using Experiencesmentioning
confidence: 99%
“…The initial condition is […], and (25) have to be shifted simultaneously. The control gains are k_z1 = 10.0, k_z2 = 10.0, and the remaining numerical parameters are identical to those in Example 1.…”
Section: Simulationmentioning
confidence: 99%
“…Theorem 3: (Learning) Consider the closed-loop system consisting of the strict-feedback plant (1) with the unknown affine terms, the reference model (2), the controller (13), and the NN weight updating laws (17). For any recurrent orbit ϕ_d(x_d(0)), and with initial conditions x(0) ∈ Ω_0 (where Ω_0 is an appropriately chosen compact set) and Ŵ_i(0) = 0, the neural-weight estimates Ŵ_i converge to small neighborhoods of the optimal values W_i^*, and the locally-accurate approximation of controller dynamics…”
Section: Learning From Iss-modular Direct Ancmentioning
confidence: 99%
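Weight updating laws of the kind referenced above are commonly built from a regressor-driven term plus a leakage (σ-modification) term that keeps the estimates bounded despite the approximation error. The sketch below shows that generic pattern, Ŵ̇ = Γ(S(Z)·z − σŴ), integrated by forward Euler; it is a standard robust adaptation rule, not the paper's exact law (17), and all numerical values are illustrative.

```python
import numpy as np

def adapt_weights(W_hat, S, z, gamma=5.0, sigma=0.1, dt=1e-3):
    """One forward-Euler step of a sigma-modified adaptive law:
        W_hat_dot = gamma * (S * z - sigma * W_hat).
    S is the RBF regressor vector, z the tracking error signal.
    The sigma leakage term drives the estimate toward a bounded
    steady state even under persistent excitation/error."""
    return W_hat + dt * gamma * (S * z - sigma * W_hat)

# Toy run with a frozen regressor and error signal: the estimate
# settles at the balance point S*z/sigma = 0.2*0.1/0.1 = 0.2
W = np.zeros(5)
S = np.full(5, 0.2)
z = 0.1
for _ in range(20000):   # 20 s of simulated time at dt = 1e-3
    W = adapt_weights(W, S, z)
print(W)
```

In the deterministic-learning results quoted above, convergence of Ŵ_i to a neighborhood of W_i^* additionally requires the regressor to satisfy a (partial) persistent-excitation condition along the recurrent orbit, which the frozen-regressor toy above only caricatures.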