Proceedings 5th Brazilian Symposium on Neural Networks (Cat. No.98EX209)
DOI: 10.1109/sbrn.1998.730989
Optimising the widths of radial basis functions

Cited by 37 publications (31 citation statements)
References 4 publications
“…Their experimental outcomes have shown that the nonsymmetric partition can lead to the development of more accurate RBF models, with a smaller number of hidden layer nodes. More elaborate methods have been suggested [58][59][60][61] for optimizing the RBF widths in order to improve approximation accuracy. Taking advantage of the linear connection between the hidden and output layer, most training algorithms calculate the synaptic weights of RBF networks by applying linear regression of the output of the hidden units on the target values.…”
Section: Orthogonal Least Squares (OLS)
confidence: 99%
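The linear connection between hidden and output layer described above can be sketched as follows. This is a minimal illustration, not the paper's method: the Gaussian basis, the fixed evenly spaced centers, the width value, and the `np.sin` target are all assumptions made for the example.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian hidden-unit activations for every (input, center) pair
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical 1-D toy problem (target and sizes are illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-4.0, 4.0, size=40)
y = np.sin(X)

centers = np.linspace(-4.0, 4.0, 10)  # fixed hidden-layer centers
width = 1.0                           # fixed RBF width (assumed)

H = rbf_design_matrix(X, centers, width)
# Because the output layer is linear in the hidden activations, the
# synaptic weights follow from linear least-squares regression of the
# hidden outputs on the targets
w, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w
```

Once the centers and widths are fixed, this regression step is a convex problem with a closed-form solution, which is why width selection (the subject of the cited work) is the hard part of RBF training.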
“…Basically, there are two categories of learning algorithms proposed for RBFNNs. The first category simply places one RBF at each training sample [10]. If all the training samples are selected as hidden centers, the generalization capability of the network becomes so poor that many noisy or deformed samples will not be recognized.…”
Section: Related Work
confidence: 99%
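The one-RBF-per-sample scheme criticized above amounts to exact interpolation: with a center on every sample, the design matrix is square and the weights reproduce the training targets exactly. A minimal sketch, assuming Gaussian basis functions and an illustrative target and width not taken from the cited work:

```python
import numpy as np

# Illustrative 1-D training samples; one RBF is centred on each
X = np.linspace(-4.0, 4.0, 12)
y = np.cos(X)          # hypothetical target, not from the paper
width = 0.5            # hypothetical fixed width

# Square interpolation matrix: hidden unit j evaluated at sample i
G = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2.0 * width ** 2))
w = np.linalg.solve(G, y)   # one weight per training sample
```

The fit passes through every training target, which is exactly why generalization suffers: any noise or deformation in the samples is absorbed into the model rather than averaged out.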
“…This domain has been widely used in the RBNN literature [5,8,17]. A random sampling with a uniform distribution over the interval [−4, 4] is used to obtain 40 input–output points for the training data.…”
Section: Domains Description
confidence: 99%
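The training-set generation described above can be sketched in a few lines. The excerpt does not name the benchmark target function, so `f` below is a placeholder; only the sampling scheme (40 points, uniform on [−4, 4]) comes from the statement.

```python
import numpy as np

rng = np.random.default_rng(7)
# 40 training inputs drawn uniformly from the interval [-4, 4],
# matching the sampling scheme in the cited experiment
X_train = rng.uniform(-4.0, 4.0, size=40)
f = np.sin                 # illustrative stand-in for the unnamed target
y_train = f(X_train)
```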