2005
DOI: 10.1109/tnn.2004.841786

Direct Adaptive Controller for Nonaffine Nonlinear Systems Using Self-Structuring Neural Networks

Abstract: A direct adaptive state-feedback controller is proposed for highly nonlinear systems. We consider uncertain or ill-defined nonaffine nonlinear systems and employ a neural network (NN) with a flexible structure, i.e., an online variation of the number of neurons. The NN approximates and adaptively cancels an unknown plant nonlinearity. A control law and adaptive laws for the weights in the hidden layer and output layer of the NN are established so that the whole closed-loop system is stable in the sense of Lyapunov…
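
The abstract outlines the overall architecture: a state-feedback law handles tracking while an online-adapted NN with a variable number of hidden neurons cancels the unknown nonaffine nonlinearity. Below is a minimal numerical sketch of that general idea, not the paper's algorithm: the plant model, the regressor `z`, the gains `eta_w`/`eta_v`, and the neuron-addition rule in `maybe_grow` are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: direct adaptive state-feedback control with a self-structuring
# single-hidden-layer network. Plant, gains and growth rule are assumed for
# illustration and are not the design from the paper.

rng = np.random.default_rng(0)

def plant(x, u):
    # Hypothetical nonaffine SISO plant x_dot = f(x, u), unknown to the controller.
    return -x + 0.5 * np.sin(x) + u + 0.2 * u**3

class SelfStructuringNN:
    """y = W^T tanh(V^T z); the hidden layer can grow online."""
    def __init__(self, n_in, max_neurons=30):
        self.n_in, self.max_neurons = n_in, max_neurons
        self.V = rng.normal(scale=0.5, size=(n_in, 1))  # input-to-hidden weights
        self.W = np.zeros((1, 1))                       # hidden-to-output weights

    def forward(self, z):
        h = np.tanh(self.V.T @ z)                       # hidden-layer activations
        return (self.W.T @ h).item(), h

    def adapt(self, z, err, eta_w=0.05, eta_v=0.01):
        # Gradient-like adaptive laws for both layers; sign chosen for err = x_d - x.
        _, h = self.forward(z)
        self.W -= eta_w * err * h
        self.V -= eta_v * err * z @ (self.W.T * (1.0 - h.T**2))

    def maybe_grow(self, err, threshold=0.1):
        # Assumed growth rule: add a neuron while the tracking error stays large.
        if abs(err) > threshold and self.W.shape[0] < self.max_neurons:
            self.V = np.hstack([self.V, rng.normal(scale=0.5, size=(self.n_in, 1))])
            self.W = np.vstack([self.W, np.zeros((1, 1))])

# Closed-loop simulation: track x_d(t) = sin(t) with Euler integration.
nn, x, dt = SelfStructuringNN(n_in=2), 0.0, 0.01
for k in range(5000):
    t = k * dt
    x_d, x_d_dot = np.sin(t), np.cos(t)
    e = x_d - x                          # tracking error
    z = np.array([[x], [e]])             # assumed regressor fed to the network
    nn_out, _ = nn.forward(z)
    u = x_d_dot + 2.0 * e - nn_out       # feedback term plus adaptive NN cancellation
    nn.adapt(z, e)
    if k % 100 == 0:                     # throttle structure changes
        nn.maybe_grow(e)
    x += dt * plant(x, u)
print(f"hidden neurons: {nn.W.shape[0]}, final tracking error: {np.sin(5000*dt) - x:+.4f}")
```

Growth is checked only every hundredth step here simply to keep the hidden layer from expanding on every transient sample; the structure-variation criterion in the paper itself is more involved than this threshold test.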

Cited by 240 publications (163 citation statements)
References 21 publications
“…The final sequence numbers of the subsystems for approximating COD were 1, 28, 9, 6, 18, 7, 21, 13, 23. The final sequence numbers of the subsystems for approximating NH3-N were 13, 21, 18, 24, 5, 8. Different quality parameters require different subsystems to be activated.…”
Section: Soft-sensing Problem (mentioning)
confidence: 99%
“…The local recurrent global feedforward models proposed in [3, 24–26] can easily be equipped with self-organizing structures. A self-structuring neural network control method was proposed in [24] which preserves the dynamics of the network when the structure changes. A way to combine training and pruning for the construction of a recurrent radial basis function network (RRBFN) based on recursive least squares (RLS) learning was discussed in [27].…”
Section: Introduction (mentioning)
confidence: 99%
“…In recent years, several approaches have been proposed to solve this problem [11]. These are divided into two categories, direct [13] and indirect [12] schemes, which are used to obtain an ideal control law. Based on the implicit function theorem, the first category uses either a fuzzy logic system or a neural network to estimate the ideal control action directly.…”
Section: Introduction (mentioning)
confidence: 99%
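
The implicit-function-theorem argument behind these direct schemes can be stated briefly. The note below uses illustrative notation (a SISO system, gain bound g_0, pseudo-control ν) rather than the cited papers' exact formulation.

```latex
% Illustrative statement of the existence argument used by direct schemes.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Consider a SISO nonaffine system $\dot{x} = f(x,u)$ with $f$ smooth and a
nonvanishing control gain, $\partial f(x,u)/\partial u \ge g_0 > 0$.
For a desired pseudo-control $\nu$, for instance $\nu = \dot{x}_d + k\,e$ with
tracking error $e = x_d - x$, the implicit function theorem guarantees the
existence of an ideal, generally unknown, control law $u^{*}(x,\nu)$ such that
\begin{equation}
  f\bigl(x,\, u^{*}(x,\nu)\bigr) = \nu .
\end{equation}
Direct schemes use a neural network or fuzzy system to approximate $u^{*}$
itself, while indirect schemes approximate $f$ and then invert it.
\end{document}
```
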
“…These learning phases not only decide the structure of the neural network but also adjust its parameters. Recently, some self-structuring neural networks have been applied to solve several control problems (Lin et al., 2001; Gao & Er, 2003; Park et al., 2005). Lin et al. (2001) used a similarity measure to prevent a newly generated membership function from being too similar to existing ones; however, the structure would grow large when the input data has large variations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Gao & Er (2003) proposed an error reduction ratio with QR decomposition to prune the hidden neurons; however, the design procedure is overly complex. Park et al (2005) proposed a self-structuring neural network which can create new hidden neurons to increase the learning ability; unfortunately, the proposed approach can not avoid the structure of neural network growing unboundedly. This paper proposes a recurrent-neural-network-based adaptive control (RNNAC) method, which combines neural-network-based adaptive control, robust control and self-structuring approach, for a class of unknown nonlinear systems.…”
Section: Introductionmentioning
confidence: 99%
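
The growth and pruning criteria contrasted in the excerpts above differ from work to work. The sketch below is a generic grow-and-prune radial-basis-function layer, offered only as an illustration: the novelty test, the activation-decay pruning rule, and every threshold (`grow_err`, `novelty`, `prune_act`) are assumptions, not the specific criteria of Lin et al. (2001), Gao & Er (2003), or Park et al. (2005).

```python
import numpy as np

# Generic grow-and-prune bookkeeping for an RBF layer; thresholds and rules are
# assumed for illustration only, not the criteria of the works cited above.

class GrowPruneRBF:
    def __init__(self, dim, sigma=0.3, max_neurons=50):
        self.dim, self.sigma, self.max_neurons = dim, sigma, max_neurons
        self.centers = np.empty((0, dim))   # one row per hidden neuron
        self.weights = np.empty((0,))
        self.activity = np.empty((0,))      # running activation average, for pruning

    def _phi(self, x):
        if len(self.centers) == 0:
            return np.empty((0,))
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return float(self.weights @ self._phi(x)) if len(self.weights) else 0.0

    def update(self, x, err, grow_err=0.3, novelty=0.7, prune_act=1e-3, lr=0.1):
        phi = self._phi(x)
        # Grow: error is large and no existing center responds strongly (novelty test).
        if (abs(err) > grow_err and (len(phi) == 0 or phi.max() < novelty)
                and len(self.centers) < self.max_neurons):
            self.centers = np.vstack([self.centers, x])
            self.weights = np.append(self.weights, 0.0)
            self.activity = np.append(self.activity, 1.0)
            phi = self._phi(x)
        if len(phi) == 0:
            return
        # Adapt weights with a simple gradient step on the squared error.
        self.weights += lr * err * phi
        # Prune: drop neurons whose running activation has decayed to near zero.
        self.activity = 0.99 * self.activity + 0.01 * phi
        keep = self.activity > prune_act
        self.centers, self.weights, self.activity = (
            self.centers[keep], self.weights[keep], self.activity[keep])

# Usage: fit y = sin(3x) from streaming samples while the structure self-adjusts.
rng = np.random.default_rng(1)
net = GrowPruneRBF(dim=1)
for _ in range(3000):
    x = rng.uniform(-2, 2, size=1)
    net.update(x, np.sin(3 * x[0]) - net.predict(x))
print(f"hidden neurons: {len(net.weights)}")
```

The two rules pull in opposite directions by design: the novelty test only adds a center when no existing one responds to the current input, while the activation-decay check removes centers that have stopped contributing, which is one simple way to keep the structure from growing unboundedly.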