2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5178913

Feed-forward network training using optimal input gains

Cited by 12 publications (3 citation statements)
References 14 publications
“…Learning in the ONGFE can be triggered in an autonomous way when recognizing particular events. A key capability for on-line automated learning consists of an embedded kernel with fast learning algorithms such as Optimized Conjugate Gradient and Output Weight Optimization-Backpropagation [9]. The ONGFE includes supervised and unsupervised ANN tools; in the latter case, automated clustering of input data is possible, which is highly desirable when the nature of the data is not known beforehand.…”
Section: A. Optimized Neuro Genetic Fast Estimator (ONGFE), mentioning
confidence: 99%
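The quoted passage names Output Weight Optimization-Backpropagation (OWO-BP) as one of the fast learning algorithms in the embedded kernel. The sketch below illustrates the general OWO-BP idea: output weights are solved exactly by linear least squares each epoch, while hidden weights take ordinary backpropagation steps. The network shape, variable names, and update schedule are illustrative assumptions, not details taken from the cited paper; biases are omitted for brevity.

```python
# Hedged sketch of the OWO-BP idea: output weights solved by least squares,
# hidden weights trained by backpropagation. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, n_in, n_hid = 200, 4, 8

X = rng.standard_normal((N, n_in))
T = np.sin(X.sum(axis=1, keepdims=True))          # toy target

W_h = rng.standard_normal((n_in, n_hid)) * 0.5    # hidden weights (BP-trained)
lr = 0.01

for epoch in range(50):
    H = np.tanh(X @ W_h)                          # hidden activations
    # OWO step: solve the linear least-squares problem H W_o ~= T exactly
    W_o, *_ = np.linalg.lstsq(H, T, rcond=None)
    E = H @ W_o - T                               # output error
    # BP step on hidden weights for the same squared-error objective
    dH = (E @ W_o.T) * (1.0 - H**2)               # tanh derivative
    W_h -= lr * (X.T @ dH) / N

print("final MSE:", float(np.mean(E**2)))
```

Solving the output weights exactly each pass is what makes this family of algorithms fast per iteration: only the hidden layer needs iterative descent.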
“…Although first-order full-training methods scale well, they lack affine invariance and are sensitive to the input means and gain factors [28]. To increase convergence per iteration, investigators have looked into second-order training algorithms.…”
Section: Introduction, mentioning
confidence: 99%
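The sensitivity claim in the statement above is easy to demonstrate. In the sketch below, plain gradient descent on a linear least-squares problem slows down badly when the input gain factors are mismatched, and recovers once per-input gains rescale the features. This only illustrates the sensitivity the citing papers refer to; it is not the optimal-input-gain algorithm of the indexed paper, and all names and constants are assumptions.

```python
# Sketch: steepest descent is sensitive to input gain factors. A learning
# rate safe for the large-gain input barely moves the small-gain direction.
import numpy as np

rng = np.random.default_rng(1)
N = 500
X = rng.standard_normal((N, 2)) * np.array([1.0, 100.0])  # badly scaled inputs
w_true = np.array([2.0, -0.03])
t = X @ w_true

def train(X, t, lr, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        g = X.T @ (X @ w - t) / len(t)   # squared-error gradient (factor 2 dropped)
        w -= lr * g
    return float(np.mean((X @ w - t) ** 2))

# Stability forces lr below ~2/var(x_2); the x_1 direction then crawls.
print("raw inputs:     MSE =", train(X, t, lr=1e-5))

# Per-input gains g_i = 1/std(x_i); the same descent now converges quickly.
gains = 1.0 / X.std(axis=0)
print("gain-corrected: MSE =", train(X * gains, t, lr=0.5))
```

The poorly scaled run leaves a large residual error after 2000 steps, while the gain-corrected run converges almost completely, which is exactly the dependence on "input means and gain factors" the quote describes.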
“…However, second-order methods do not scale well and suffer from heavy computational cost. Although first-order methods scale better, they are sensitive to input means and gain factors [17].…”
Section: Introduction, mentioning
confidence: 99%
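The scaling contrast in this last statement can be made concrete with a back-of-the-envelope count: a first-order step costs on the order of N_w operations per training pattern, while a full second-order step needs an N_w x N_w curvature matrix and an O(N_w^3) solve. The constants below are rough assumptions for illustration only, not measurements from either paper.

```python
# Rough per-iteration flop counts for first- vs second-order training steps.
# Constants are illustrative assumptions (Gauss-Newton build, Cholesky solve).
def step_costs(n_weights: int, n_patterns: int):
    first_order = 2 * n_weights * n_patterns         # gradient accumulation
    hessian_build = 2 * n_weights**2 * n_patterns    # curvature accumulation
    hessian_solve = n_weights**3 // 3                # dense factorization
    return first_order, hessian_build + hessian_solve

for n_w in (1_000, 10_000, 100_000):
    fo, so = step_costs(n_w, n_patterns=10_000)
    print(f"N_w={n_w:>7}: first-order ~{fo:.1e} flops, second-order ~{so:.1e} flops")
```

Under these assumptions the second-order step grows quadratically to cubically with the weight count, which is the "heavy computational cost" the quote contrasts with the better scaling of first-order methods.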