1994
DOI: 10.1109/72.279191
Neurocontrol of nonlinear dynamical systems with Kalman filter trained recurrent networks

Abstract: Although the potential of the powerful mapping and representational capabilities of recurrent network architectures is generally recognized by the neural network research community, recurrent neural networks have not been widely used for the control of nonlinear dynamical systems, possibly due to the relative ineffectiveness of simple gradient descent training algorithms. Developments in the use of parameter-based extended Kalman filter algorithms for training recurrent networks may provide a mechanism by which…
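The abstract refers to parameter-based extended Kalman filter (EKF) training of recurrent networks. As a rough illustration of the idea only, and not of the authors' implementation (the paper uses a decoupled, block-diagonal filter with derivatives from truncated backpropagation through time), the sketch below treats the weights of a tiny recurrent network as the state of a nonlinear system and applies a single global EKF update per training pattern. The output Jacobian is approximated by finite differences to keep the code short; all variable and function names are illustrative assumptions.

```python
# Minimal sketch of extended-Kalman-filter (EKF) training for a tiny recurrent
# network.  Illustration only: not the decoupled EKF / truncated-BPTT setup of
# the cited paper.  All names and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
# Weight vector packs input, recurrent, and output weights plus biases.
n_w = n_hid * (n_in + n_hid + 1) + (n_hid + 1)


def unpack(w):
    i = 0
    W_xh = w[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    W_hh = w[i:i + n_hid * n_hid].reshape(n_hid, n_hid); i += n_hid * n_hid
    b_h = w[i:i + n_hid]; i += n_hid
    W_hy = w[i:i + n_hid]; i += n_hid
    b_y = w[i]
    return W_xh, W_hh, b_h, W_hy, b_y


def rnn_output(w, xs):
    """Run the recurrent net over a sequence xs; return the final scalar output."""
    W_xh, W_hh, b_h, W_hy, b_y = unpack(w)
    h = np.zeros(n_hid)
    for x in xs:
        h = np.tanh(W_xh @ np.atleast_1d(x) + W_hh @ h + b_h)
    return W_hy @ h + b_y


def jacobian_fd(w, xs, eps=1e-6):
    """Finite-difference Jacobian d(output)/d(weights), shape (1, n_w)."""
    base = rnn_output(w, xs)
    H = np.zeros((1, n_w))
    for k in range(n_w):
        wp = w.copy(); wp[k] += eps
        H[0, k] = (rnn_output(wp, xs) - base) / eps
    return H


# EKF state: the weights w with covariance P; R is the measurement-noise term,
# Q a small process-noise term that keeps the filter from "going to sleep".
w = 0.1 * rng.standard_normal(n_w)
P = 100.0 * np.eye(n_w)
R = np.array([[1.0]])
Q = 1e-4 * np.eye(n_w)

# Toy task: predict the next value of a noisy sine wave from a short window.
t = np.arange(200) * 0.2
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)

for step in range(20, series.size):
    xs, target = series[step - 20:step], series[step]
    y_hat = rnn_output(w, xs)
    H = jacobian_fd(w, xs)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    w = w + K @ np.array([target - y_hat])
    P = P - K @ H @ P + Q
```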

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
200
0
5

Year Published

1998
1998
2017
2017

Publication Types

Select...
7
2

Relationship

0
9

Authors

Journals

Cited by 469 publications (205 citation statements)
References 20 publications
“…LSTM generalized well though, requiring only the 30 shortest exemplars (n ≤ 10) of the CSL a^n b^n c^n to correctly predict the possible continuations of sequence prefixes for n up to 1000 and more. A combination of a decoupled extended Kalman filter (Kalman, 1960; Williams, 1992b; Puskorius and Feldkamp, 1994; Feldkamp et al, 1998; Haykin, 2001; Feldkamp et al, 2003) and an LSTM RNN (Pérez-Ortiz et al, 2003) learned to deal correctly with values of n up to 10 million and more. That is, after training the network was able to read sequences of 30,000,000 symbols and more, one symbol at a time, and finally detect the subtle differences between legal strings such as a^{10,000,000} b^{10,000,000} c^{10,000,000} and very similar but illegal strings such as a^{10,000,000} b^{9,999,999} c^{10,000,000}.…”
Section: Supervised Recurrent Very Deep Learner (LSTM RNN)
Classification: mentioning (confidence: 99%)
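The excerpt above describes the prediction task for the context-sensitive language a^n b^n c^n: after each symbol the network must indicate which symbols could legally come next. The sketch below implements that target function (the "oracle" a trained predictor is compared against), not the LSTM/DEKF learner itself; the terminator symbol and all names are illustrative assumptions.

```python
# Oracle for the a^n b^n c^n continuation-prediction task described above.
# Given a prefix of a legal string, it returns the set of symbols that may
# legally follow ('T' marks the assumed end-of-string terminator).
def legal_continuations(prefix: str) -> set[str]:
    n_a, n_b, n_c = prefix.count("a"), prefix.count("b"), prefix.count("c")
    if n_b == 0 and n_c == 0:            # still in the a-block
        return {"a", "b"} if n_a > 0 else {"a"}
    if n_c == 0:                          # in the b-block
        return {"b"} if n_b < n_a else {"c"}
    if n_c < n_a:                         # in the c-block
        return {"c"}
    return {"T"}                          # string complete: expect terminator


def make_string(n: int) -> str:
    return "a" * n + "b" * n + "c" * n


# Example: walk through a short exemplar symbol by symbol.
s = make_string(3)
for i in range(len(s)):
    print(s[:i] or "<empty>", "->", legal_continuations(s[:i]))
```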
“…The final sequence numbers of subsystems for approximating COD were 1, 28, 9, 6, 18, 7, 21, 13, 23. The final sequence numbers of subsystems for approximating NH₃-N were 13, 21, 18, 24, 5, 8. Different quality parameters require different subsystems to be activated.…”
Section: Soft-Sensing Problem
Classification: mentioning (confidence: 99%)
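The excerpt describes a modular soft sensor in which different subsets of subsystems are activated depending on the quality parameter being estimated. A minimal sketch of such a selection mechanism is shown below; only the subsystem index lists come from the excerpt, while the dictionary, function name, and selection logic are hypothetical.

```python
# Toy illustration of parameter-dependent subsystem activation for the
# soft-sensing setup quoted above.  The index lists are taken from the
# excerpt; the selection function itself is an assumed sketch.
SUBSYSTEMS_BY_PARAMETER = {
    "COD":   [1, 28, 9, 6, 18, 7, 21, 13, 23],
    "NH3-N": [13, 21, 18, 24, 5, 8],
}

def active_subsystems(quality_parameter: str) -> list[int]:
    """Return the indices of the subsystems used to approximate a parameter."""
    return SUBSYSTEMS_BY_PARAMETER[quality_parameter]

print(active_subsystems("NH3-N"))   # -> [13, 21, 18, 24, 5, 8]
```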
“…The classical fully recurrent network [6,7] is composed of a single layer of fully interconnected neurons. Several such recurrent layers are combined to obtain a richer architecture [8]. Other cases of recurrent networks are the external feedback representations [9], the higher-order recurrent neural networks [10], and the block-structured recurrent neural networks [11].…”
Section: Introduction
Classification: mentioning (confidence: 99%)
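The excerpt refers to the classical fully recurrent network: a single layer of neurons in which every unit feeds back to every other unit. The sketch below shows one time-step of such a layer, purely as an illustration of the architecture; the tanh nonlinearity, weight scales, and names are assumptions rather than details of the cited works.

```python
import numpy as np

# One time-step of a classical fully recurrent layer: each neuron receives the
# external input plus the previous activations of all neurons, including
# itself.  Illustrative sketch only.
def fully_recurrent_step(x, h_prev, W_in, W_rec, b):
    """x: input vector, h_prev: previous activations; returns new activations."""
    return np.tanh(W_in @ x + W_rec @ h_prev + b)

rng = np.random.default_rng(1)
n_in, n_units = 3, 5
W_in = 0.1 * rng.standard_normal((n_units, n_in))
W_rec = 0.1 * rng.standard_normal((n_units, n_units))   # full interconnection
b = np.zeros(n_units)

h = np.zeros(n_units)
for t in range(10):                      # run the layer over a short sequence
    x_t = rng.standard_normal(n_in)
    h = fully_recurrent_step(x_t, h, W_in, W_rec, b)
```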
“…such that the initial output values of the hidden neurons are in the linear part of the tanh. We propose to initialize the weights with small random values, a choice which is also made in most EKF approaches (in [4], [11] and [12] for example).…”
Section: Proposed Algorithm for the Training of Feedforward Neural Mo…
Classification: mentioning (confidence: 99%)
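The excerpt motivates initializing weights with small random values so that the hidden units start in the approximately linear region of the tanh, where gradients are largest. A minimal sketch of that initialization follows; the 0.1 scale factor is an illustrative assumption, not a value taken from the cited papers.

```python
import numpy as np

# Small random initial weights keep the pre-activations near zero, so the
# initial hidden outputs lie in the nearly linear part of tanh.
# The scale of 0.1 is an assumed, illustrative value.
def init_small_random(n_out, n_in, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    return scale * rng.standard_normal((n_out, n_in))

W = init_small_random(10, 5)
x = np.random.default_rng(1).standard_normal(5)
pre_activation = W @ x
print(np.tanh(pre_activation))   # values stay close to pre_activation itself
```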