IJCNN-91-Seattle International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1991.155276
Decoupled extended Kalman filter training of feedforward layered networks

Cited by 170 publications (102 citation statements)
References 6 publications
“…This choice differs from that made in [4] and [11], where the constraints imposed by the use of a neural network are not specifically taken into account, and where the diagonal terms of P0 are chosen to be of the order of 100, with r also set to a value close to 1.…”
Section: Proposed Algorithm for the Training of Feedforward Neural Networks
confidence: 98%
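
For context, P0 above is the initial weight-error covariance of the Kalman recursion and r its measurement-noise term. A minimal initialization sketch in Python/NumPy, assuming a network with n_w trainable weights; the weight count and variable names are illustrative, not taken from the paper:

    import numpy as np

    n_w = 50                   # number of trainable weights (illustrative)
    P0 = 100.0 * np.eye(n_w)   # initial covariance: diagonal terms of order 100, as in [4] and [11]
    r = 1.0                    # measurement-noise term set close to 1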
“…for which the weights must converge to constant values, they should not be allowed to drift. Nevertheless, a diagonal matrix Q with positive components is sometimes added in the update equation for P(k), both to avoid numerical instability and to help escape local minima [11], [12]. But the order of magnitude of the elements of Q that still lets the weights converge to constant values seems to be highly problem-dependent.…”
Section: Remark
confidence: 99%
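
To make the role of Q concrete, here is a hedged sketch of one global-EKF training step for a scalar-output network, with Q taken as q times the identity. The function name ekf_step and the default q value are illustrative assumptions; this is not the paper's exact decoupled formulation:

    import numpy as np

    def ekf_step(w, P, H, e, r=1.0, q=1e-4):
        """One EKF weight update with an additive process-noise term Q = q*I.

        w : (n_w,) weight vector
        P : (n_w, n_w) weight-error covariance
        H : (n_w,) derivatives of the scalar network output w.r.t. the weights
        e : scalar output error (target minus network output)
        """
        a = 1.0 / (r + H @ P @ H)   # inverse innovation variance (scalar)
        K = (P @ H) * a             # Kalman gain
        w = w + K * e               # weight update
        # Adding q*I keeps P numerically well conditioned and can help the
        # search escape local minima, but too large a q prevents the weights
        # from settling to constant values [11], [12].
        P = P - np.outer(K, H @ P) + q * np.eye(len(w))
        return w, P

With q = 0 the weights can settle, consistent with the remark above; a positive q trades that convergence against numerical stability, and a suitable magnitude appears to be problem-dependent.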