2001
DOI: 10.1016/s0893-6080(01)00027-2

Three learning phases for radial-basis-function networks

Cited by 456 publications (214 citation statements)
References 32 publications

“…It then propagates this error backwards through the network, and updates the weights in each layer through gradient descent. Although backpropagation is usually associated with artificial neural networks that use sigmoid or hyperbolic activation functions, the algorithm is also applicable to radial basis function networks (Poggio and Girosi, 1990; Schwenker et al., 2001a).…”
Section: Algorithm: Backpropagation (mentioning, confidence: 99%)
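
For readers who want to see what this looks like concretely, the following is a minimal NumPy sketch of one gradient-descent (backpropagation) step for a Gaussian RBF network. The function name, learning rate, and batch formulation are illustrative assumptions, not the procedure from Schwenker et al. (2001a).

```python
import numpy as np

# Minimal sketch (not the authors' code): one gradient-descent step for an
# RBF network y(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2)).
def rbf_backprop_step(X, T, C, S, W, eta=0.01):
    """One batch gradient step on centers C, widths S and output weights W."""
    # Hidden activations: Phi[n, j] = exp(-||x_n - c_j||^2 / (2 * s_j^2))
    D = X[:, None, :] - C[None, :, :]              # (N, J, d) differences
    sq = np.sum(D**2, axis=2)                      # (N, J) squared distances
    Phi = np.exp(-sq / (2.0 * S**2))               # (N, J) hidden outputs
    Y = Phi @ W                                    # (N, K) network outputs
    E = Y - T                                      # output-layer error
    # Error propagated back through the network gives the gradients:
    dW = Phi.T @ E / len(X)                        # output weights
    dPhi = E @ W.T                                 # (N, J) error at hidden layer
    dC = np.einsum('nj,njd->jd', dPhi * Phi / S**2, D) / len(X)   # centers
    dS = np.sum(dPhi * Phi * sq / S**3, axis=0) / len(X)          # widths
    # Gradient-descent updates on all three parameter groups
    return C - eta * dC, S - eta * dS, W - eta * dW
```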
“…A complementary analysis comes from Widrow et al. (2013), who examine the conditions under which training a network benefits from backpropagation. Schwenker et al. (2001a) use a combination of both least-squares batch learning and incremental backpropagation.…”
Section: Backpropagation for SLFNs (mentioning, confidence: 99%)
“…The final transformation should be able to extract the desired information from this image. If the final transformation is linear, $Y = {}^{k+1}X = T_{k+1}({}^{k}X;\ {}^{k}W)$, the parameters ${}^{k}W$ are either determined in an iterative procedure simultaneously with all other parameters $W$ from the previous transformations (as in the backpropagation algorithms [36]), or determined sequentially by calculating the pseudoinverse transformation, as is frequently practiced in two-phase RBF learning [91]. In experiments on more demanding data, simultaneous adaptation of all parameters (RBF centers, scaling parameters, output layer weights) gives better results.…”
Section: Transformation-based Meta-learning (mentioning, confidence: 99%)
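
As a concrete illustration of the sequential alternative mentioned in this quote, here is a minimal sketch of two-phase RBF learning in which the centers and width are fixed first and the output weights are then obtained from the pseudoinverse. The center-selection heuristic and the helper name are assumptions for illustration, not the cited procedure.

```python
import numpy as np

# Minimal sketch of two-phase RBF learning: phase 1 fixes centers and width,
# phase 2 solves the linear output layer with the pseudoinverse.
def fit_rbf_two_phase(X, T, n_centers=10, seed=0):
    rng = np.random.default_rng(seed)
    # Phase 1: pick centers (here simply a random subset of the data)
    # and a common width from the average inter-center distance.
    C = X[rng.choice(len(X), n_centers, replace=False)]
    dists = np.linalg.norm(C[:, None] - C[None, :], axis=2)
    s = dists[dists > 0].mean()
    # Phase 2: with the hidden activations fixed, the output weights follow
    # from the linear least-squares solution W = pinv(Phi) @ T.
    Phi = np.exp(-np.sum((X[:, None] - C[None, :])**2, axis=2) / (2 * s**2))
    W = np.linalg.pinv(Phi) @ T
    return C, s, W
```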
“…This is basically equivalent to random initialization of feedforward neural networks with linear transfer functions only. Such methods are used to start two-phase RBF learning [91]. For simple data, random projections work rather well [97], but one should always check the results of linear discrimination in the original feature space, as it may not be significantly worse.…”
Section: Transformation-based Meta-learning (mentioning, confidence: 99%)
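
The check suggested in this quote can be sketched in a few lines: score a linear classifier on a random projection of the features and on the original feature space, and keep the simpler representation only if it is not clearly worse. The dataset, classifier, and projection dimension below are arbitrary choices for illustration, not taken from the cited works.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
R = rng.standard_normal((X.shape[1], 2))              # random linear projection
clf = LogisticRegression(max_iter=1000)
score_proj = cross_val_score(clf, X @ R, y).mean()    # projected features
score_orig = cross_val_score(clf, X, y).mean()        # original feature space
print(f"random projection: {score_proj:.3f}, original space: {score_orig:.3f}")
```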
“…In this case the method for linear SVM training can be applied. The Gaussian radial basis function is one of these kernels, so support vector learning can be applied as a learning method for radial-basis-function networks, with the support vectors serving as the centers of the radial basis functions [7,8]. Paper [9] extends the support vector machine algorithm to the multiclass classification problem, using different weights for different outputs and selecting the class that produces the maximum value.…”
(mentioning, confidence: 99%)
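
A brief sketch of the idea described in this quote, assuming scikit-learn and an arbitrary dataset: train an SVM with a Gaussian kernel and reuse its support vectors as the centers of an RBF network. The dataset and kernel parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
svm = SVC(kernel='rbf', gamma=0.5).fit(X, y)
centers = svm.support_vectors_        # support vectors become the RBF centers
# Hidden-layer activations of the resulting RBF network, with the width
# chosen to match the SVM kernel (exp(-gamma * ||x - c||^2), gamma = 0.5).
Phi = np.exp(-0.5 * np.sum((X[:, None] - centers[None, :])**2, axis=2))
print(f"{len(centers)} support vectors used as RBF centers; Phi shape {Phi.shape}")
```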