1987
DOI: 10.1103/physreva.35.380

Associative recall of memory without errors

Abstract: A neural network which is capable of recalling without errors any set of linearly independent patterns is studied. The network is based on a Hamiltonian version of the model of Personnaz et al. The energy of a state of N (±1) neurons is the square of the Euclidean distance in phase space between the state and the linear subspace spanned by the patterns. This energy corresponds to nonlocal updatings of the synapses in the learning mode. Results of the mean-field theory (MFT) of the system as well as computer simula…
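The energy described in the abstract — the squared Euclidean distance between a network state and the subspace spanned by the stored patterns — can be sketched numerically. This is an illustrative sketch, not code from the paper; the names `xi`, `P`, and `energy` are assumptions, with patterns stored as columns of a NumPy array.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 5  # N neurons, M stored +/-1 patterns

# Random +/-1 patterns as columns of xi; for N >> M these are
# linearly independent with overwhelming probability.
xi = rng.choice([-1.0, 1.0], size=(N, M))

# Orthogonal projector onto the linear subspace spanned by the patterns.
P = xi @ np.linalg.pinv(xi)

def energy(s):
    """Squared Euclidean distance from state s to the pattern subspace."""
    r = s - P @ s
    return float(r @ r)

print(energy(xi[:, 0]))  # a stored pattern lies in the subspace: ~0
```

Because every stored pattern lies in the subspace, each pattern sits at the global energy minimum (zero), which is why recall is error-free for linearly independent patterns.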

Cited by 383 publications (204 citation statements)
References 13 publications
“…For instance, the pseudo-inverse rule has a higher capacity (1 bit per synapse), but it is neither linear nor local (Kanter and Sompolinsky (1987); Hertz et al (1991)). …”
Section: Discussion (mentioning)
confidence: 99%
“…Outstanding popularity was gained by the Hopfield model (Hopfield 1982, Amit 1987), which, by the symmetry of its interactions and by its Monte Carlo dynamics, has had great appeal for physicists trained in statistical mechanics. Examples have shown, however, that similar behaviour may be reached with quite different architectures and philosophies, such as the asymmetric strict-stability models (Kohonen 1984, Personnaz et al 1985, Kanter and Sompolinsky 1987). At present, many questions still appear to be open, concerning architectures, learning rules, storage prescriptions, etc.…”
Section: Introduction (mentioning)
confidence: 99%
“…, M, which can be written as C = ξξ⁺, where ξ⁺ is the Moore-Penrose pseudoinverse of ξ. Some papers, based on not totally rigorous techniques and simulations, indicate that this rule allows a higher capacity than the classical Hebbian learning rule (see [12,13,19]). But theoretically, this rule involves calculating the inverse of an M × M matrix to get the pseudoinverse matrix.…”
Section: Remarks 3.2 (mentioning)
confidence: 99%
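The construction quoted above, C = ξξ⁺, can be sketched as follows. This is an assumption-laden illustration, not code from any of the cited papers; `xi` and `C` are hypothetical names, and the explicit form relies on the patterns being linearly independent.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 16, 4
xi = rng.choice([-1.0, 1.0], size=(N, M))  # patterns as columns

# Pseudo-inverse rule: C = xi xi^+, with xi^+ the Moore-Penrose
# pseudoinverse of xi.
C = xi @ np.linalg.pinv(xi)

# For linearly independent columns, xi^+ = (xi^T xi)^-1 xi^T, so the
# only inversion required is of the M x M matrix xi^T xi.
C_explicit = xi @ np.linalg.inv(xi.T @ xi) @ xi.T

print(np.allclose(C, C_explicit))  # True
```

The explicit form makes the quoted cost remark concrete: the N × N synaptic matrix is obtained by inverting only an M × M matrix, with M the number of stored patterns.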
“…(1) The dynamics proposed by Amari and Yanai is in fact related to the pseudo-inverse learning rule, also called the projection learning rule ([12,13,19]). The idea of this learning rule is to seek a matrix C which guarantees the stability of all original patterns through the strong conditions Cξμ = ξμ, for all μ = 1, .…”
Section: Remarks 3.2 (mentioning)
confidence: 99%
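The stability condition Cξμ = ξμ quoted above is easy to check numerically: since C is the projector onto the span of the patterns, each stored pattern is mapped exactly to itself and is therefore a fixed point of sign dynamics. A minimal sketch, with assumed names `xi`, `C`, and a single-step update `sign(C s)`:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 24, 6
xi = rng.choice([-1.0, 1.0], size=(N, M))
C = xi @ np.linalg.pinv(xi)  # projection learning rule

# Stability: every stored pattern satisfies C xi_mu = xi_mu exactly,
# so sign(C xi_mu) = xi_mu and each pattern is a fixed point.
for mu in range(M):
    assert np.allclose(C @ xi[:, mu], xi[:, mu])

# Illustrative recall attempt from a corrupted copy of pattern 0
# (3 flipped neurons); C s projects s back toward the pattern subspace.
s = xi[:, 0].copy()
s[:3] *= -1
recalled = np.sign(C @ s)
print(int((recalled == xi[:, 0]).sum()), "of", N, "neurons correct")
```

Note that the fixed-point property holds exactly for any set of linearly independent patterns, which is the "without errors" guarantee of the paper's title; recall from a corrupted state additionally depends on the basin of attraction.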