1999
DOI: 10.1109/72.737502
On the Kalman filtering method in neural network training and pruning

Abstract: In using the extended Kalman filter approach to train and prune a feedforward neural network, one usually encounters the problems of how to set the initial condition and how to use the result obtained to prune the network. In this paper, some cues on setting the initial condition are presented and illustrated with a simple example. Then, based on three assumptions: 1) the size of the training set is large enough; 2) the training is able to converge; and 3) the trained network model is clos…
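As background for the approach the abstract describes, the following is a minimal sketch of extended-Kalman-filter training for a one-hidden-layer feedforward network under a random-walk model for the weights. The function and parameter names (ekf_train, predict, p0, q, r) and the finite-difference Jacobian are illustrative assumptions, not details taken from the paper.

```python
# Minimal EKF training sketch for a one-hidden-layer network with a scalar output.
# The weights are treated as the state of a random-walk model and updated per sample.
import numpy as np

def predict(w, x, n_hidden):
    """Scalar network output for input x given a flat weight vector w."""
    n_in = x.size
    k = 0
    W1 = w[k:k + n_hidden * n_in].reshape(n_hidden, n_in); k += n_hidden * n_in
    b1 = w[k:k + n_hidden]; k += n_hidden
    W2 = w[k:k + n_hidden]; k += n_hidden
    b2 = w[k]
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def jacobian(w, x, n_hidden, eps=1e-6):
    """Finite-difference row Jacobian dh/dw (a simple stand-in for backprop derivatives)."""
    H = np.zeros(w.size)
    for i in range(w.size):
        dw = np.zeros(w.size); dw[i] = eps
        H[i] = (predict(w + dw, x, n_hidden) - predict(w - dw, x, n_hidden)) / (2 * eps)
    return H

def ekf_train(X, y, n_hidden=5, p0=100.0, q=1e-6, r=1e-2, epochs=5, seed=0):
    """Update the weight vector and its error covariance with a per-sample EKF step."""
    rng = np.random.default_rng(seed)
    n_w = n_hidden * X.shape[1] + n_hidden + n_hidden + 1
    w = 0.1 * rng.standard_normal(n_w)
    P = p0 * np.eye(n_w)                       # initial error covariance (the "initial condition")
    for _ in range(epochs):
        for x, t in zip(X, y):
            H = jacobian(w, x, n_hidden)       # linearize the network around the current weights
            S = H @ P @ H + r                  # innovation variance
            K = (P @ H) / S                    # Kalman gain
            w = w + K * (t - predict(w, x, n_hidden))
            P = P - np.outer(K, H @ P) + q * np.eye(n_w)
    return w, P

# Toy usage: fit y = sin(x) on a small sample.
X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel()
w, P = ekf_train(X, y)
```

In this sketch the initial covariance P = p0·I plays the role of the initial condition discussed in the abstract; a larger p0 lets early samples move the weights more aggressively.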

Cited by 72 publications (26 citation statements)
References 18 publications
“…As will be pointed out in the next paragraph, this idea is also useful for assessing the relative importance (saliency) of the network parameters, which may be pruned accordingly [17].…”
Section: Algorithm Description
confidence: 99%
“…(7). Denoting P∞ = lim_{k→∞} P[k] and the covariance matrix of the process noise Q = qI, the incremental change of the approximation error due to removing the kth element of the (parameters) state-vector W is given by [17] …”
Section: Two-spirals Classification Problem
confidence: 99%
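To illustrate how a converged error covariance can be turned into a pruning criterion, here is a hedged sketch that ranks weights by an optimal-brain-surgeon-style saliency w_k^2 / (2·P_kk) and zeroes the least salient ones. The criterion and the helper name prune_by_saliency are illustrative stand-ins; the exact incremental-error expression derived in [17] is not reproduced here.

```python
# Hedged sketch: covariance-based saliency ranking for pruning, in the spirit of
# optimal-brain-surgeon-style criteria (not the paper's exact formula).
import numpy as np

def prune_by_saliency(w, P, n_remove):
    """Rank weights by w_k^2 / (2 * P_kk) and zero out the n_remove least salient ones."""
    saliency = w ** 2 / (2.0 * np.diag(P))   # small saliency: removing w_k barely changes the error
    drop = np.argsort(saliency)[:n_remove]   # indices of the least important weights
    w_pruned = w.copy()
    w_pruned[drop] = 0.0
    return drop, w_pruned

# Example (using the w, P returned by the EKF training sketch above):
# drop, w_pruned = prune_by_saliency(w, P, n_remove=3)
```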
“…Many researchers have suggested that the Levenberg-Marquardt (LM) method outperforms BP, CG, and Quasi-Newton methods [79,80]. Several other methods proposed for FNN optimization are based on the Kalman filter [81,82] and the recursive least squares method [83].…”
Section: Conventional Optimization Approaches
confidence: 99%
“…Various derivative-based methods have been used to train neural networks, including gradient descent [3], Kalman filtering [4,5], and backpropagation [9]. Gradient descent training of RBF networks has proven to be much more effective than more conventional methods [3]. However, gradient descent training can be computationally expensive.…”
Section: Introduction
confidence: 99%