1991
DOI: 10.1162/neco.1991.3.4.579

Improving the Generalization Properties of Radial Basis Function Neural Networks

Abstract: An important feature of radial basis function neural networks is the existence of a fast, linear learning algorithm in a network capable of representing complex nonlinear mappings. Satisfactory generalization in these networks requires that the network mapping be sufficiently smooth. We show that a modification to the error functional allows smoothing to be introduced explicitly without significantly affecting the speed of training. A simple example is used to demonstrate the resulting improvement in the generalization properties.
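A minimal sketch of the idea in the abstract, not the paper's code: the output weights of an RBF network are found by a single linear solve even after a regularization term is added to the error functional, so training stays fast. The paper's smoothing term penalizes curvature of the network mapping; here a simple weight penalty lam * ||w||^2 stands in for it, since both choices leave the problem linear in the weights. The Gaussian basis choice and the names `rbf_design_matrix` and `fit_rbf` are assumptions for illustration.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian basis functions: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    # X: (N, d) inputs, centers: (M, d), returns Phi: (N, M)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width, lam):
    # Regularized linear least squares for the output weights:
    # solve (Phi^T Phi + lam * I) w = Phi^T y -- still one linear solve,
    # so adding the smoothing term barely affects training speed.
    Phi = rbf_design_matrix(X, centers, width)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)
```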

Cited by 291 publications (111 citation statements) · References 2 publications
“…The specific network architecture that will be investigated in this thesis is a hybrid between the Factorized Radial Basis Function Networks and the fuzzy/neural networks for implementing fuzzy controllers capable of learning from a reinforcement signal [78][79][80][81][82][83]. The activation function used in an F-RBFN with n input units is […]. As the adopted activation functions are continuous and differentiable over the whole domain, it is possible to apply the classical error gradient descent technique in order to finely tune the weights and the parameters of the activation functions.…”
Section: Factorized RBFN (F-RBFN)
confidence: 99%
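A minimal sketch of the fine-tuning step this excerpt describes, assuming Gaussian activations and a squared-error loss; the function name, learning rate, and parameterization are illustrative assumptions, not taken from the thesis. Because the activations are differentiable everywhere, the output weights, centers, and widths can all be updated by plain gradient descent.

```python
import numpy as np

def sgd_step(x, t, w, c, s, lr=1e-2):
    # One gradient-descent step on E = 0.5 * (y - t)^2 for a Gaussian RBF net.
    # x: input (dim,), t: scalar target, w: weights (n_units,),
    # c: centers (n_units, dim), s: widths (n_units,).
    diff = x[None, :] - c                 # (n_units, dim)
    d2 = (diff ** 2).sum(axis=1)          # squared distances (n_units,)
    phi = np.exp(-d2 / (2.0 * s ** 2))    # Gaussian activations
    err = phi @ w - t                     # network output minus target
    g = err * w * phi                     # shared chain-rule factor
    w -= lr * err * phi                   # dE/dw_j = err * phi_j
    c -= lr * g[:, None] * diff / (s ** 2)[:, None]  # dE/dc_j
    s -= lr * g * d2 / s ** 3             # dE/ds_j
    return w, c, s
```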
“…How to choose a good value of λ has been addressed in the statistical literature (Hoerl and Kennard 1970). Bishop (1991) has suggested that the performance of the RBF network may be fairly insensitive to the precise value of λ. An elegant approach to the selection of the regularization parameter is to adopt a Bayesian interpretation and to calculate the best value of the regularization parameter using the evidence procedure (MacKay 1992).…”
Section: Choice of Regularization Parameter
confidence: 99%
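A small illustration of the insensitivity claim cited above, reusing the hypothetical `fit_rbf` and `rbf_design_matrix` from the sketch under the abstract; the synthetic data, center placement, and λ grid are arbitrary assumptions, not from either paper. Sweeping λ over several orders of magnitude and checking held-out error often shows a fairly flat region.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(60)
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]
centers, width = X_tr[:10], 0.15   # arbitrary: first 10 training points as centers

# Sweep lambda over several orders of magnitude and report held-out error.
for lam in (1e-6, 1e-4, 1e-2, 1.0):
    w = fit_rbf(X_tr, y_tr, centers, width, lam)
    pred = rbf_design_matrix(X_val, centers, width) @ w
    print(f"lambda={lam:g}  validation MSE={np.mean((pred - y_val) ** 2):.4f}")
```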
“…Another interesting alternative to MLPs is Radial Basis Function Neural Networks (RBFNNs). RBFNNs can be considered a local approximation procedure, and the improvement both in their approximation ability and in the construction of their architecture has been noteworthy [10]. RBFNNs have been used in the most varied domains, from function approximation to pattern classification, time series prediction, data mining, signal processing, and nonlinear system modeling and control, but again there are very few works testing this model in bankruptcy or crisis prediction.…”
Section: Introduction
confidence: 99%