1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation
DOI: 10.1109/icsmc.1997.635142

A new training algorithm for the general regression neural network

Cited by 59 publications (35 citation statements) · References 1 publication

Citation statements, ordered by relevance:
“…The DE convergence plots often exhibit a characteristic staircase behavior, with intervals where the cost does not decrease from generation to generation. To mitigate this drawback somewhat, we have implemented an ad hoc hybrid method, which combines DE with an occasional deterministic descent along the negative gradient of the cost function, with the gradients computed by the finite-difference method [34,35]. The descent is triggered with a probability of 0.5 each time the best cost does not decrease between consecutive generations.…”
Section: Numerical Results (mentioning)
confidence: 99%
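The passage above describes a concrete hybrid scheme: standard DE iterations, plus an occasional finite-difference gradient step fired with probability 0.5 whenever the best cost stalls between generations. The sketch below is a generic reconstruction under exactly those stated rules, not code from the citing paper; the function names, step size, and DE control parameters (F, CR, population size, DE/rand/1/bin strategy) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fd_gradient(cost, x, h=1e-6):
    """Central finite-difference estimate of the cost gradient."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (cost(x + e) - cost(x - e)) / (2.0 * h)
    return g

def hybrid_de(cost, bounds, pop_size=20, gens=200, F=0.8, CR=0.9,
              descent_prob=0.5, lr=1e-2):
    """DE/rand/1/bin with an occasional gradient step on the best member,
    triggered when the best cost fails to improve between generations
    (the staircase mitigation described in the quoted passage)."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    costs = np.array([cost(x) for x in pop])
    best_prev = costs.min()
    for _ in range(gens):
        for i in range(pop_size):
            # Donor vectors are drawn uniformly, excluding the target i.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover; force at least one mutant coordinate.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            tc = cost(trial)
            if tc <= costs[i]:  # greedy one-to-one replacement
                pop[i], costs[i] = trial, tc
        b_idx = costs.argmin()
        # Hybrid step: if the best cost stalled this generation, descend
        # along the negative finite-difference gradient with prob. 0.5.
        if costs[b_idx] >= best_prev and rng.random() < descent_prob:
            stepped = np.clip(pop[b_idx] - lr * fd_gradient(cost, pop[b_idx]),
                              lo, hi)
            sc = cost(stepped)
            if sc < costs[b_idx]:
                pop[b_idx], costs[b_idx] = stepped, sc
        best_prev = costs.min()
    return pop[costs.argmin()], costs.min()
```

For example, hybrid_de(lambda x: np.sum(x**2), [(-5, 5)] * 4) converges to the origin. The gradient step is accepted only when it strictly improves the incumbent best, so the hybrid never degrades the best solution found by the DE iterations themselves.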
“…It is perhaps worthwhile to point out some differences between the DE and traditional, real-coded genetic algorithms (GAs) [33,34,39]. First, DE does not involve selection of parents based on fitness.…”
Section: Appendix B: Differential Evolution (mentioning)
confidence: 99%
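The distinction drawn above can be made concrete in a few lines; this is an illustrative contrast of the two selection schemes, not code from either cited source. In a real-coded GA, parents are sampled with a fitness bias; in DE, the donor vectors are drawn uniformly at random, and fitness enters only the greedy target-versus-trial comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_pick_parents(pop, fitness):
    """Real-coded GA: parent selection is biased by fitness
    (roulette-wheel sampling shown; assumes non-negative fitness)."""
    p = fitness / fitness.sum()
    i, j = rng.choice(len(pop), size=2, replace=False, p=p)
    return pop[i], pop[j]

def de_pick_donors(pop, target_idx):
    """DE: the three donor vectors are drawn uniformly at random;
    fitness plays no role in choosing them."""
    idx = [j for j in range(len(pop)) if j != target_idx]
    a, b, c = pop[rng.choice(idx, size=3, replace=False)]
    return a, b, c  # fitness is consulted only when trial and target compete
```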
“…When WN is applied in the real world, for example, the morphology of ECG waveforms varies between patients, and even for the same patient or the same type (Osowski & Linh, 2001), so traditional networks can become a bottleneck, requiring retraining whenever new features are added to the current database. PNN (Specht et al, 1988) and general regression neural networks (GRNN) (Masters & Land, 1997; Seng et al, 2002) have been presented, and are recognized as having an expandable or reducible network structure, fast learning speed, and promising results. In these adaptation methods, the choice of smoothing parameter has significant effects on the network outcome, and that choice is usually based on overall statistical calculations from the pre-collected training data.…”
Section: Introduction (mentioning)
confidence: 99%
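For context on why the smoothing parameter matters so much, here is the standard single-σ GRNN estimator in Specht's form, as a generic sketch (the adaptive parameter-selection methods discussed above are not reproduced here):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN regression estimate: a Gaussian-kernel weighted average of the
    stored training targets. The smoothing parameter sigma controls the
    bias-variance trade-off: small sigma interpolates the training data,
    large sigma flattens the prediction toward the global mean."""
    # Squared Euclidean distance from every query to every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)   # summation / output layers
```

Because new training pairs simply become new kernel centres, the structure grows or shrinks with the database, which is the "expandable or reducible network structure" the passage refers to; only σ then needs re-tuning.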
“…The advantages of these approaches are shown in (Storn & Price, 1997). The use of DE for training neural networks was first introduced in (Masters & Land, 1997). It was reported that the DE algorithm is particularly suitable for training general regression neural networks (GRNN), and that it outperforms other training methods, such as gradient- and Hessian-based approaches, in applications where multiple local minima are present in the error space.…”
Section: Introduction (mentioning)
confidence: 99%
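As a hedged illustration of the DE–GRNN pairing, SciPy's off-the-shelf DE optimizer can tune per-feature GRNN smoothing widths against held-out error. This is a generic sketch assuming synthetic data and an anisotropic Gaussian kernel; the name holdout_mse and the data split are hypothetical, and this is not the specific training algorithm of Masters & Land (1997).

```python
import numpy as np
from scipy.optimize import differential_evolution

def holdout_mse(log_sigma, X_tr, y_tr, X_va, y_va):
    """Validation MSE of a GRNN with per-feature smoothing widths.
    Optimizing log(sigma) keeps every width positive without constraints."""
    sigma = np.exp(log_sigma)
    d2 = (((X_va[:, None, :] - X_tr[None, :, :]) / sigma) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2)
    pred = (w @ y_tr) / np.maximum(w.sum(axis=1), 1e-12)
    return float(np.mean((pred - y_va) ** 2))

# Hypothetical data; substitute a real train/validation split in practice.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

result = differential_evolution(
    holdout_mse,                        # error surface may be multimodal
    bounds=[(-4.0, 2.0)] * X.shape[1],  # search over log(sigma) per feature
    args=(X_tr, y_tr, X_va, y_va),
    maxiter=100,
    seed=0,
)
print("per-feature sigmas:", np.exp(result.x), "validation MSE:", result.fun)
```

A population-based search like this has no trouble with the multimodal validation-error surfaces mentioned above, which is exactly the setting in which the quoted passage reports DE outperforming gradient- and Hessian-based training.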