1991
DOI: 10.1016/0893-6080(91)90005-p
A Gaussian potential function network with hierarchically self-organizing learning

Cited by 305 publications (81 citation statements)
References 12 publications
“…The algorithm is based on the idea that the number of hidden units should correspond to the complexity of the underlying function as reflected in the observed data. Lee et al [10] developed hierarchically self-organizing learning (HSOL) in order to determine the optimal number of hidden units of their Gaussian function network. For the same purpose, Musavi et al [13] employed a method in which a large number of hidden nodes are merged whenever possible.…”
Section: Sequential Learning Using the PG-RBF Network
confidence: 99%
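The idea that the number of hidden units should track the complexity reflected in the data can be illustrated with a minimal growing-RBF sketch. This is not Lee et al.'s actual HSOL procedure (which also adapts unit widths hierarchically); the error-threshold recruitment rule, the fixed `width`, and the `err_tol` parameter below are simplifying assumptions for illustration only.

```python
import numpy as np

def fit_growing_rbf(X, y, width=0.3, err_tol=0.1):
    """Grow a Gaussian RBF model one unit at a time.

    A new hidden unit is recruited at any training point whose
    current prediction error exceeds err_tol -- a simplified
    stand-in for an HSOL-style accommodation criterion.
    """
    centers, weights = [], []

    def predict(x):
        # With no units yet, the model outputs zero everywhere.
        if not centers:
            return 0.0
        # Squared distance from x to every existing center.
        d2 = np.sum((np.array(centers) - x) ** 2, axis=1)
        # Weighted sum of Gaussian activations.
        return float(np.exp(-d2 / (2.0 * width ** 2)) @ np.array(weights))

    for x, t in zip(X, y):
        err = t - predict(x)
        if abs(err) > err_tol:    # data not yet covered: add a unit here
            centers.append(x)
            weights.append(err)   # new unit absorbs the residual error
    return np.array(centers), np.array(weights), predict

# Tiny 1-D example: three points, each far enough apart that
# the model recruits one unit per point.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([1.0, 0.0, -1.0])
centers, weights, predict = fit_growing_rbf(X, y)
```

A simpler target function would trigger fewer recruitments, so the hidden-layer size ends up matching the complexity the data exhibits, which is the point the cited passage makes.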
“…Since recurrent networks incorporate feedback, they have powerful representation capability and can successfully overcome disadvantages of feedforward networks [8]. RBF neural networks have been used as a powerful tool in many engineering and scientific applications as they possess the following features: 1) they are universal approximators [1]; 2) they have a simple topological structure [2]; 3) they can implement fast learning algorithms because of locally tuned neurons [3]. In this study a recurrent RBF neural network is introduced where the RBF network's ability is added to the advantages of recurrent networks.…”
Section: Introduction
confidence: 99%
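The "locally tuned neurons" mentioned in the excerpt above can be sketched as a plain Gaussian RBF forward pass: each hidden unit's activation decays with distance from its center, so only units near the input respond strongly. The function name and the tiny two-unit configuration below are illustrative, not taken from the cited works.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network.

    Each hidden unit is locally tuned: activation is
    exp(-||x - c||^2 / (2 * sigma^2)), near 1 close to the
    center c and near 0 far from it.
    """
    # Squared distance from input x to each unit's center.
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian activations of the hidden layer.
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    # Output: linear combination of the local activations.
    return phi @ weights

# Two units on a 1-D input: the unit centered at 0.0 dominates
# when x = 0, because the other unit is two widths away.
centers = np.array([[0.0], [1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])
out = rbf_forward(np.array([0.0]), centers, widths, weights)
```

This locality is what enables the fast learning the excerpt refers to: a training example meaningfully updates only the few units whose activations are non-negligible at that input.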
“…In addition, these approaches usually work offline, so they are not suitable for practical real-time applications where online learning is required for the neural-network-based controller design. To remedy the aforementioned shortcomings, several growing RBF networks have been proposed in [8,11,12,18,21].…”
Section: Introduction
confidence: 99%