1998
DOI: 10.1016/s0764-4469(97)89830-7
Modeling memory: what do we learn from attractor neural networks?

Cited by 6 publications (5 citation statements), published 2004–2019
References 16 publications
“…Indeed, Hebb's famous postulate (Hebb, 1949) that causally correlated firing of connected neurons could lead to a strengthening of the connection, was based on the suggestion that the correlated firing would be maintained in a recurrently connected cell assembly beyond the time of a transient stimulus (Hebb, 1949). Since then, analytic and computational models have demonstrated the ability of such recurrent networks to produce multiple discrete attractor states (Brunel and Nadal, 1998), as in Hopfield networks (Hopfield, 1982, 1984), or to be capable of integration over time via a marginally stable network, often termed a line attractor (Zhang, 1996; Compte et al, 2000). Much of the work on these systems has assumed either static synapses, or considered changes in synaptic strength via long-term plasticity occurring on a much slower timescale than the dynamics of neuronal responses.…”
Section: Introduction (mentioning)
confidence: 99%
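The excerpt above describes recurrent networks with Hebbian weights settling into multiple discrete attractor states, as in Hopfield networks. A minimal sketch of that behaviour follows; the network size, number of patterns, corruption level, and asynchronous sign-threshold update rule are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100          # number of binary (+/-1) units
n_patterns = 3   # stored patterns, well below the classical ~0.14*n capacity

# Random patterns stored as attractors via the Hebbian outer-product rule:
# W = (1/n) * sum_mu xi_mu xi_mu^T, with zero self-connections.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    """Asynchronous updates; converges to a fixed point for symmetric W."""
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

# Corrupt 10% of the bits of the first pattern, then let the dynamics settle.
probe = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
probe[flip] *= -1

settled = recall(probe)
overlap = settled @ patterns[0] / n   # overlap of 1.0 means perfect recall
```

With the load this far below capacity, the corrupted probe typically falls back into the stored pattern's basin of attraction, which is the "multiple discrete attractor states" property the excerpt refers to.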
“…Networks of recurrently connected units are capable of producing a diversity of distinct point-attractor states [1]. The component units of these networks can be single neurons, or groups of correlated, similarly responsive neurons.…”
Section: Circuits With Multiple Point-attractor States (mentioning)
confidence: 99%
“…A number of workers have used artificial associative memories based upon the Hopfield model in their attempts to explain the mechanisms underlying the operation both of normal and abnormal human memory, including phenomena such as pseudo-rehearsal and unlearning [21,32]. Hopfield-style networks have also been used in attempts to simulate the behaviour of human memory when subjected to various types of damage, in order to elicit information about the underlying causes of diseases such as Alzheimer's dementia and schizophrenia [8,22,33–35].…”
Section: Biological Plausibility (mentioning)
confidence: 99%
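The excerpt above mentions damaging Hopfield-style networks to probe memory disorders. A common way to do this is to delete a random fraction of synapses and measure how well a stored pattern is still recalled; the sketch below illustrates that idea under assumed parameters (network size, pattern count, deletion probabilities), and is not a reconstruction of any of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = patterns.T @ patterns / n      # Hebbian outer-product weights
np.fill_diagonal(W, 0)

def recall(W, state, sweeps=10):
    """Asynchronous sign-threshold updates until a fixed point is reached."""
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

def overlap_after_damage(p_delete):
    """Zero out a random fraction of synapses, then test recall of pattern 0."""
    mask = rng.random((n, n)) >= p_delete
    mask = mask & mask.T           # keep the damaged weight matrix symmetric
    settled = recall(W * mask, patterns[0])
    return settled @ patterns[0] / n

# Recall typically stays near-perfect under mild damage and degrades only
# gradually as more synapses are removed (graceful degradation).
mild = overlap_after_damage(0.1)
severe = overlap_after_damage(0.8)
```

The graceful-degradation curve produced by sweeping `p_delete` is the kind of observable that damage studies of this sort compare against clinical memory impairment.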