1998
DOI: 10.1016/s0925-2312(98)00021-6
A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models

Abstract: Isabelle Rivals, Léon Personnaz. A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models. Neurocomputing, Elsevier, 1998, 20 (1-3), pp. 279-294. The Extended Kalman Filter (EKF) is a well-known tool for the recursive parameter estimation of static and dynamic nonlinear models. In particular, the EKF has been applied to the estimation of the weights of feedforward and recurrent ne…

Cited by 47 publications (16 citation statements) · References 8 publications
“…A Kalman filter attempts to estimate the state of a system that can be modeled as a linear system driven by additive white Gaussian noise, where the available measurements are linear combinations of the system states corrupted by additive white Gaussian noise (Li et al, 2002). For neural network training, the network weights are the states of the state equation that the Kalman filter estimates, and the desired output of the network is the measurement of the measurement equation, as shown in the equations below (Rivals and Personnaz, 1998):…”
Section: Cascade Correlation Artificial Neural Network (CCANN)
confidence: 99%
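The state-space view quoted above (the weight vector as the state of a random-walk state equation, the desired output as the measurement) can be sketched as follows. This is a minimal illustrative implementation in Python/NumPy, not the paper's exact algorithm: the network size, the noise covariances, and the use of a numerical Jacobian are all assumptions made for brevity.

```python
import numpy as np

# Sketch of EKF weight estimation for a single-output feedforward network.
# State equation:       w_{k+1} = w_k + q_k        (random-walk weights)
# Measurement equation: y_k = f(w_k, x_k) + r_k    (desired network output)
# All sizes and noise levels below are illustrative assumptions.

rng = np.random.default_rng(0)
n_in, n_hid = 1, 5
n_w = n_hid * (n_in + 1) + (n_hid + 1)   # weights of a 1-hidden-layer net

def unpack(w):
    W1 = w[:n_hid * n_in].reshape(n_hid, n_in)
    b1 = w[n_hid * n_in:n_hid * (n_in + 1)]
    W2 = w[n_hid * (n_in + 1):n_hid * (n_in + 1) + n_hid]
    b2 = w[-1]
    return W1, b1, W2, b2

def forward(w, x):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2                    # scalar network output

def jacobian(w, x, eps=1e-6):
    # Numerical Jacobian dy/dw (length n_w); in practice an analytic
    # Jacobian from backpropagation would be used instead.
    H = np.zeros(n_w)
    for i in range(n_w):
        dw = np.zeros(n_w); dw[i] = eps
        H[i] = (forward(w + dw, x) - forward(w - dw, x)) / (2 * eps)
    return H

w = 0.1 * rng.standard_normal(n_w)        # initial weight estimate
P = 100.0 * np.eye(n_w)                   # initial error covariance (large)
Q = 1e-6 * np.eye(n_w)                    # process noise covariance
R = 1e-2                                  # measurement noise variance

def ekf_step(w, P, x, y):
    P_pred = P + Q                        # time update (random-walk state)
    H = jacobian(w, x)
    S = H @ P_pred @ H + R                # innovation variance (scalar)
    K = P_pred @ H / S                    # Kalman gain
    w_new = w + K * (y - forward(w, x))   # measurement update
    P_new = P_pred - np.outer(K, H @ P_pred)
    return w_new, P_new

# Toy usage: fit y = sin(x) on a few sweeps of training data.
xs = np.linspace(-2, 2, 40)
err0 = max(abs(forward(w, np.array([xi])) - np.sin(xi)) for xi in xs)
for _ in range(30):
    for xi in xs:
        w, P = ekf_step(w, P, np.array([xi]), np.sin(xi))
err = max(abs(forward(w, np.array([xi])) - np.sin(xi)) for xi in xs)
```

Note how the large initial `P` and small `Q` correspond to the initialization choices discussed in the statements below: `P` encodes the uncertainty on the initial weights, and `Q` controls how much the weights are allowed to drift between updates.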
“…This learning rule is acceptable for the type of forecasting problems where the number of inputs is not too large (Grewal and Andrews, 2001). A Kalman filter (Brown and Hwang, 1992; Rivals and Personnaz, 1998; Demuth and Beale, 2001; Grewal and Andrews, 2001; Li et al, 2002) attempts to estimate the state of a system that can be modeled as a linear system driven by additive white Gaussian noise, where the available measurements are linear combinations of the system states corrupted by additive white Gaussian noise (Li et al, 2002). A detailed description of the Cascade Correlation architecture in which Kalman's learning rule is embedded can be found in Diamantopoulou (2010).…”
Section: Study Area and Meteorological Data Sets
confidence: 99%
“…This value is used to initialize the diagonal of the error covariance matrix for the Kalman estimates of the weights. Additionally, the work of Rivals and Personnaz (1998) analyzes different initial values for the Kalman filter training algorithm and provides advice on how to choose the initial values of the error covariance and the process noise covariance in the Kalman recursion.…”
Section: Study Area and Meteorological Data Sets
confidence: 99%
“…Furthermore, it reduces the dimension of the state vector. Rivals and Personnaz (1998) investigated various initial conditions of the weights for the EKF training algorithm. Van der Merwe (2004) used the unscented Kalman filter (UKF) to train neural networks and showed that the UKF is much more stable and has a faster convergence rate than the EKF algorithm (Zhao et al, 2012).…”
Section: Introduction
confidence: 99%