1989
DOI: 10.1162/neco.1989.1.2.281

Fast Learning in Networks of Locally-Tuned Processing Units

Abstract: We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988). We consider training such networks in a completely supervised manner, but abandon this approach in favor of a more computationally efficient hybrid learning method which combines self-organized and supervised learning. Our networks learn faster than backpropagation for two reasons: the local representations ensure…
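For orientation, the architecture the abstract describes reduces to a single layer of locally-tuned units feeding a linear output layer. The following is a minimal sketch of that forward pass, assuming Gaussian hidden units; the function name, argument shapes, and variable names are illustrative assumptions, not the authors' code.

    import numpy as np

    # Minimal sketch of a locally-tuned (RBF) network forward pass,
    # assuming Gaussian hidden units; names and shapes are illustrative.
    def rbf_forward(X, centers, widths, W):
        """X: (N, d) inputs; centers: (K, d); widths: (K,); W: (K, m) output weights."""
        # Squared Euclidean distance from every input to every centre.
        sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
        # Locally tuned response: a unit fires appreciably only near its centre.
        H = np.exp(-sq_dist / (2.0 * widths ** 2))                           # (N, K)
        # Linear output layer; in the hybrid scheme only W is trained supervised.
        return H @ W                                                         # (N, m)

In the hybrid scheme the abstract mentions, the centres and widths come from unsupervised procedures (see the citation snippets below), so only W requires supervised training, and that fit is linear.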

Cited by 3,752 publications (1,459 citation statements)
References 2 publications
“…The aim of this paper is to report the application of these DRBFN and IRBFN methods in solving DEs. In contrast to the approach taken by other authors as reviewed above, in the present methods the width of the $i$th neuron (centre), $a^{(i)}$, is determined according to the following simple relation (Moody and Darken, 1989): $a^{(i)} = \beta\, d^{(i)}$ (2), where $\beta$ is a factor, $\beta > 0$, and $d^{(i)}$ is the distance from the $i$th centre to the nearest centre. Relation (2) indicates that it is reasonable to assign a larger width where the centres are widely separated from each other and a smaller width where the centres are closer.…”
mentioning
confidence: 99%
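The quoted width rule translates directly into code. Below is a hedged sketch, assuming plain Euclidean distance between centres; the function name and the default value of β are illustrative choices, not part of the cited method.

    import numpy as np

    # Sketch of the rule a^(i) = beta * d^(i) quoted above, where d^(i)
    # is the distance from the i-th centre to its nearest neighbour.
    # The function name and default beta are assumptions for illustration.
    def widths_from_nearest_centre(centers, beta=1.0):
        """centers: (K, d) array of RBF centres; returns the (K,) widths a^(i)."""
        diff = centers[:, None, :] - centers[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))   # (K, K) pairwise distances
        np.fill_diagonal(dist, np.inf)             # a centre is not its own neighbour
        d_nearest = dist.min(axis=1)               # d^(i): nearest-centre distance
        return beta * d_nearest                    # a^(i) = beta * d^(i), beta > 0

Widely separated centres thus receive large widths and tightly packed centres small ones, exactly the behaviour the snippet attributes to relation (2).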
“…The way in which the network is used for data modeling differs between time-series approximation and pattern recognition. In pattern classification applications, the most widely used radial activation function is the Gaussian [9], [10]. The placement of the Gaussian centers influences the performance of the RBF network.…”
Section: A. Classifiers, 1) Feed-Forward Neural Network
mentioning
confidence: 99%
“…To avoid these situations, they suggested the use of a clustering algorithm to position the centers. The RBF used in this study is combined with the K-means clustering algorithm [9] for initialization of class centers.…”
Section: A. Classifiers, 1) Feed-Forward Neural Network
mentioning
confidence: 99%
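To illustrate the division of labour this snippet describes, here is a sketch of the supervised half of the hybrid scheme: with centres positioned by K-means (a K-means sketch follows the next snippet) and widths set by the nearest-centre rule above, only the linear output weights remain to be fitted. All names are illustrative assumptions.

    import numpy as np

    # Hypothetical sketch: with centres and widths fixed by unsupervised
    # procedures, fitting the output layer is a linear least-squares problem.
    def fit_rbf_output_layer(X, Y, centers, widths):
        """X: (N, d) inputs; Y: (N, m) targets; returns (K, m) output weights."""
        sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        H = np.exp(-sq_dist / (2.0 * widths ** 2))  # Gaussian hidden activations
        W, *_ = np.linalg.lstsq(H, Y, rcond=None)   # linear fit, no backpropagation
        return W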
“…It has also been used for determining initial center values for the subsequent supervised training of radial basis function networks (Moody & Darken, 1989). The K-means algorithm finds $K$ vectors $m_i$, $i = 1, \ldots, K$, given $N$ data points $\{x_n\}$.…”
Section: K-Means Algorithm
mentioning
confidence: 99%
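The snippet's description translates into the standard two-step iteration. A minimal sketch, assuming random data points as initial centres and a fixed iteration count (both illustrative choices):

    import numpy as np

    # Minimal K-means sketch: find K vectors m_i, i = 1, ..., K,
    # given N data points {x_n}, as described in the snippet above.
    def kmeans(X, K, iters=100, seed=0):
        """X: (N, d) data points; returns the (K, d) centre vectors m_i."""
        rng = np.random.default_rng(seed)
        m = X[rng.choice(len(X), size=K, replace=False)].astype(float)
        for _ in range(iters):
            # Assignment step: each point joins its nearest centre.
            labels = ((X[:, None, :] - m[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
            # Update step: each centre moves to the mean of its assigned points.
            for k in range(K):
                if np.any(labels == k):
                    m[k] = X[labels == k].mean(axis=0)
        return m

The resulting vectors m_i can then serve as the initial centre values for the subsequent supervised RBF training stage, as the snippet notes.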