1989
DOI: 10.1209/0295-5075/10/4/018

An Improved Version of the Pseudo-Inverse Solution for Classification and Neural Networks

Abstract: In this letter we present a noniterative learning rule for classification and neural networks that eliminates the overfitting drawback of the pseudo-inverse (PI) solution while preserving good learning performance. The solution, obtained by artificially increasing the number of patterns in the learning set, is a parametric form between the pseudo-inverse and the Hebb solutions. The results are compared to each other and with those of an iterative gradient-descent procedure on two very di…
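
The abstract describes a λ-parametrized family that interpolates between the pseudo-inverse and Hebb solutions. As a rough illustration of what such a rule can look like, here is a minimal sketch assuming a ridge-style regularization of the pattern overlap matrix; the function name parametric_pi_weights and the exact parametrization are illustrative, not taken from the paper.

```python
import numpy as np

def parametric_pi_weights(X, Y, lam):
    """Hypothetical lambda-parametrized rule (illustrative sketch only,
    not necessarily the paper's exact formula): a regularized
    pseudo-inverse fit.

    X   : (n_units, n_patterns) input patterns, one column per pattern
    Y   : (n_out,   n_patterns) desired outputs
    lam : >= 0; lam -> 0 recovers the pseudo-inverse fit Y (X^T X)^-1 X^T,
          while a large lam tends toward a (1/lam)-scaled Hebbian rule Y X^T.
    """
    p = X.shape[1]                      # number of training patterns
    C = X.T @ X + lam * np.eye(p)       # regularized pattern overlap matrix
    return Y @ np.linalg.solve(C, X.T)  # weight matrix W with Y ~= W X

# Toy usage: random +/-1 patterns stored as an auto-association task
rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(50, 20))   # 20 patterns on 50 units
W = parametric_pi_weights(xi, xi, lam=1.0)
print(np.mean(np.sign(W @ xi) == xi))         # fraction of bits recalled
```

In this Tikhonov-style form the two named limits fall out directly: as λ → 0 the rule approaches the pseudo-inverse fit, while for large λ the inverted matrix approaches (1/λ)·I and the weights reduce to a scaled Hebbian correlation.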

Cited by 10 publications (1 citation statement) · References 9 publications
“…The scaling factor λ was chosen to allow for a good compromise between learning and generalization. Its precise value was optimized for each analysis as this value depended on the population size and number of training trials (see [34] for the λ optimization procedure).…”
Section: Methods
confidence: 99%
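
The quoted Methods passage selects λ as a trade-off between learning and generalization, tuned separately for each analysis. A minimal sketch of one generic way such a value could be chosen, assuming a held-out validation sweep (the actual optimization procedure of reference [34] is not reproduced here; select_lambda and fit_fn are illustrative names):

```python
import numpy as np

def select_lambda(fit_fn, X_tr, Y_tr, X_val, Y_val, lambdas):
    """Pick lambda by held-out recall accuracy (illustrative sketch only).

    fit_fn(X, Y, lam) should return a weight matrix W with Y ~= sign(W X),
    e.g. the parametric pseudo-inverse rule sketched above.
    """
    scores = []
    for lam in lambdas:
        W = fit_fn(X_tr, Y_tr, lam)
        acc = float(np.mean(np.sign(W @ X_val) == Y_val))  # bitwise recall accuracy
        scores.append((acc, lam))
    best_acc, best_lam = max(scores)   # highest validation accuracy wins
    return best_lam, best_acc
```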