International Joint Conference on Neural Networks 1989
DOI: 10.1109/ijcnn.1989.118343

Generalising the nodes of the error propagation network

Cited by 25 publications (30 citation statements)
References 0 publications
“…For instance, the viability of two RBF layers in cascade (replacing the linear associator between hidden and output layers of an RBFN with another Euclidean/Gaussian layer) has been shown (Robinson et al, 1988; Dorffner, 1992). Also, the combination of MLP and RBF units in one hidden layer can lead to improved results (Weymaere, 1992).…”

Section: Previous Attempts On Unification
confidence: 96%
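
To make the cascade idea in this excerpt concrete, here is a minimal numpy sketch of two Gaussian (RBF) layers in sequence, where the second Gaussian layer replaces the usual linear associator. The function name, shapes, and shared-width choice are illustrative assumptions, not the cited authors' implementation:

```python
import numpy as np

def gaussian_layer(x, centers, width):
    # Activations of a layer of Gaussian (RBF) units for one input vector x.
    # centers: (n_units, n_in) array of unit centers; width: shared Gaussian width.
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared Euclidean distances
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                    # toy input
c1 = rng.standard_normal((8, 3))              # hidden-layer centers
c2 = rng.standard_normal((4, 8))              # second layer's centers, living
                                              # in hidden-activation space
h = gaussian_layer(x, c1, width=1.0)          # first RBF layer
y = gaussian_layer(h, c2, width=1.0)          # cascade: Gaussian layer on top
```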
“…Important examples of such cross-fertilizations are the use of gradient descent for RBFNs (Robinson et al, 1988; Weymaere, 1992; Dorffner, 1992) and the use of initialization + delta for MLPs (Smyth, 1992; Weymaere, 1992). The former is a viable way of fine-tuning the centers and/or widths of RBFs, and the latter has proved to improve speed (Smyth, 1992) or even performance (Dorffner and Porenta, 1994) of MLPs (explained in more detail later).…”

Section: Previous Attempts On Unification
confidence: 99%
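
As a sketch of the "gradient descent for RBFNs" point, the following shows one gradient step fine-tuning the output weights, centers, and widths of a single-output Gaussian RBFN on squared error. The function name, shapes, and learning rate are hypothetical, not taken from the cited papers:

```python
import numpy as np

def rbfn_grad_step(x, t, w, centers, widths, lr=0.01):
    # One gradient-descent step on E = 0.5 * (y - t)**2 for a Gaussian RBFN
    # with activations h_j = exp(-||x - c_j||^2 / (2 * sigma_j^2)).
    diff = x - centers                            # (n_units, n_in)
    d2 = np.sum(diff ** 2, axis=1)                # squared distances to centers
    h = np.exp(-d2 / (2.0 * widths ** 2))         # Gaussian activations
    e = w @ h - t                                 # error of the linear output
    grad_w = e * h                                # dE/dw_j     = e * h_j
    grad_c = (e * w * h / widths ** 2)[:, None] * diff   # dE/dc_j
    grad_s = e * w * h * d2 / widths ** 3         # dE/dsigma_j
    w -= lr * grad_w                              # update all parameters in place
    centers -= lr * grad_c
    widths -= lr * grad_s
    return 0.5 * e ** 2                           # squared error before the step
```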
“…Momentum is the addition of a smoothing term to the weight-updating equation in order to reinforce weight changes that occur in a consistent direction through weight space. The other is learning rate adaptation of the style utilized in [5]. If the total squared error for all patterns in the training set decreased from one epoch to the next, then the learning rate is increased by a fixed factor (here 0.1%).…”

Section: Neural Network Models
confidence: 99%
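
A minimal sketch of those two heuristics together, assuming plain gradient descent on a toy objective; the variable names and the 0.9 momentum coefficient are illustrative assumptions, while the 0.1% rate increase follows the excerpt:

```python
import numpy as np

grad_fn = lambda w: 2.0 * w          # toy objective E(w) = ||w||^2
w = np.array([1.0, -2.0])
velocity = np.zeros_like(w)          # smoothed update direction (momentum term)
lr, prev_error = 0.05, np.inf
for epoch in range(100):
    velocity = 0.9 * velocity - lr * grad_fn(w)   # momentum reinforces changes
    w = w + velocity                              # in a consistent direction
    error = float(np.sum(w ** 2))                 # total squared error this epoch
    if error < prev_error:
        lr *= 1.001                  # error fell: raise learning rate by 0.1%
    prev_error = error
```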
“…On the other hand, the third type uses generalized error propagation [5], a generalization of back propagation, to adjust the parameters of the Gaussian hidden units.…”

Section: Neural Network Models
confidence: 99%
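
The derivative terms such a generalized propagation needs for Gaussian hidden units follow from the chain rule; in the notation of the sketch above, these are the standard formulas, not quotations from [5]:

```latex
h_j = \exp\!\left(-\frac{\lVert x - c_j\rVert^2}{2\sigma_j^2}\right), \qquad
\frac{\partial h_j}{\partial c_j} = h_j\,\frac{x - c_j}{\sigma_j^2}, \qquad
\frac{\partial h_j}{\partial \sigma_j} = h_j\,\frac{\lVert x - c_j\rVert^2}{\sigma_j^3}
```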