1982
DOI: 10.1007/bf00275687

Simplified neuron model as a principal component analyzer

Cited by 2,116 publications (1,396 citation statements). References 10 publications.
“…This statement is interpreted mainly in two different ways. First, a change in the wiring is possible only if both of the connected neurons are active and, thus, correlated (Oja 1982; Bienenstock et al. 1982). The second interpretation is that a change should depend only on information that is locally available, that is, the activity of the two neurons and the weight itself (Gerstner and Kistler 2002; Tetzlaff et al. 2011).…”
Section: Long-term Plasticity
confidence: 99%
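To make the locality reading concrete, here is a minimal sketch (not taken from the cited papers) of a per-synapse update in which the weight change depends only on the presynaptic activity, the postsynaptic activity, and the current weight. Oja's (1982) rule is used as the example; the function name and learning rate are illustrative.

```python
def local_plasticity_step(w, pre, post, eta=0.01):
    """One plasticity step that uses only locally available quantities:
    presynaptic activity `pre`, postsynaptic activity `post`, and the
    current weight `w`. The update shown is Oja's (1982) rule, which is
    also correlation-based, since it is driven by the product pre * post.
    """
    return w + eta * (post * pre - post ** 2 * w)
```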
“…The network settles in two phases, an expectation (minus) phase where the network's actual output is produced, and an outcome (plus) phase where the target output is experienced, and then computes a simple difference of a pre- and postsynaptic activation product across these two phases. For Hebbian learning, Leabra uses essentially the same learning rule used in competitive learning or mixtures-of-Gaussians, which can be seen as a variant of the Oja normalization (Oja, 1982). The error-driven and Hebbian learning components are combined additively at each connection to produce a net weight change.…”
Section: A4 Hebbian and Error-driven Learning
confidence: 99%
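A hedged sketch of the combination described in that excerpt, for a single connection with scalar activations: the error-driven term is the difference of pre/postsynaptic activation products between the plus and minus phases, the Hebbian term follows the Oja-style normalization, and the two are added. The function name and the mixing coefficient `k_hebb` are assumptions for illustration, not Leabra's actual API.

```python
def leabra_style_dw(pre_minus, post_minus, pre_plus, post_plus, w, k_hebb=0.01):
    """Sketch: additive combination of error-driven and Hebbian weight changes.

    err_driven: difference of pre/post activation products between the
                outcome (plus) and expectation (minus) phases.
    hebbian:    Oja-normalization-style term computed from the plus phase.
    """
    err_driven = post_plus * pre_plus - post_minus * pre_minus
    hebbian = post_plus * (pre_plus - w)
    return k_hebb * hebbian + (1.0 - k_hebb) * err_driven
```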
“…The function f is introduced to keep the weights w finite and has to be chosen in a way so that rule (6) extracts the principal component of G. Examples are given by Oja (1982) as f(w) = w^T G w and by Yuille et al. (1989) as f(w) = w^T w, where w^T is the transpose of the vector w. The result of a training of the network with these learning rules is an arbitrary weight vector in the subspace spanned by the eigenvectors to the maximal eigenvalue of the correlation matrix G. The behaviour of this model system can therefore be analyzed by simply calculating the principal component vector (vectors) of G for different parameter sets which are introduced in the next subsection.…”
Section: Dynamics and Learning Rule
confidence: 99%
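For reference, the fixed points of such a rule can be checked directly. The exact form of rule (6) is not reproduced in the excerpt, so the sketch below assumes the standard averaged form dw/dt = Gw − f(w)w; with Oja's choice f(w) = w^T G w, the stable fixed point is the unit-length principal eigenvector of G, which is what justifies analyzing the model by simply computing the principal component of G.

```latex
% Assumed averaged form of rule (6); not quoted from the excerpt above.
\[
  \frac{d\mathbf{w}}{dt} \;=\; G\mathbf{w} \;-\; f(\mathbf{w})\,\mathbf{w},
  \qquad
  f(\mathbf{w}) \;=\; \mathbf{w}^{\mathsf T} G\,\mathbf{w}.
\]
% At a fixed point, G\mathbf{w} = f(\mathbf{w})\,\mathbf{w}, so \mathbf{w} is an
% eigenvector of G with eigenvalue \lambda = \mathbf{w}^{\mathsf T} G \mathbf{w}.
% Left-multiplying by \mathbf{w}^{\mathsf T} gives \lambda = \lambda\,\mathbf{w}^{\mathsf T}\mathbf{w},
% so \|\mathbf{w}\| = 1 whenever \lambda \neq 0; linear stability then singles
% out the eigenvector of the largest eigenvalue, i.e. the principal component of G.
```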
“…Again there are no restrictions on the receptive field size, and in this model a modified Oja's learning rule was used (Oja 1982), which was shown to lead to cells performing principal component analysis on their input stimuli (which is not the case for Linsker's rule).…”
Section: Introduction
confidence: 99%
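As a quick illustration of that claim, a self-contained Python sketch (the covariance matrix, learning rate, and sample count are made up for the example): iterating Oja's rule Δw = η·y·(x − y·w), with y = w·x, over zero-mean stimuli drives w toward the leading eigenvector of the input correlation matrix, which can be checked against a direct eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical zero-mean input stimuli with an anisotropic covariance.
C = np.array([[3.0, 1.0],
              [1.0, 1.5]])
X = rng.multivariate_normal(mean=np.zeros(2), cov=C, size=20000)

# Oja's (1982) rule: w <- w + eta * y * (x - y * w), with y = w . x
w = rng.normal(size=2)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# The learned weight vector should align with the first principal component.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, -1]                          # leading eigenvector
print(abs(w @ pc1) / np.linalg.norm(w))       # close to 1 when aligned
```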