Optimal computation with attractor networks
2003
DOI: 10.1016/j.jphysparis.2004.01.022

Cited by 32 publications (39 citation statements); References 21 publications.
“…However, it is much less clear whether any continuous attractor network could match the performance of an ideal estimator in tracking its own instantaneous attractor state. It has been shown previously that continuous attractor networks, when evolving deterministically (noise free), can estimate the location of a bump input from a sample of spikes in a Bayes-optimal way (30). The memory network faces a more difficult task because it needs to continuously estimate a state that is drifting, and its dynamics as a readout network are noisy.…”
Section: Results (mentioning; confidence: 99%)
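The ideal-estimator side of this comparison can be made concrete with a small sketch. This is an assumption-laden illustration, not the model of reference (30): it assumes von Mises tuning curves, independent Poisson spiking, and a flat prior, and reads out the bump location from a single sample of spikes by a grid search for the posterior mode.

```python
import numpy as np

# Illustrative only: a Bayes-optimal ("ideal observer") readout of a bump's location
# from one sample of spikes, assuming von Mises tuning and independent Poisson noise.
# Tuning shape, gain, and neuron count are placeholder assumptions.

n_neurons = 100
prefs = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)  # preferred locations

def mean_rates(theta, gain=10.0, kappa=3.0):
    """Expected spike counts when the bump is centred at theta."""
    return gain * np.exp(kappa * (np.cos(prefs - theta) - 1.0))

def posterior(spikes, grid):
    """Posterior over bump location given one spike-count vector (flat prior)."""
    log_like = np.array([np.sum(spikes * np.log(mean_rates(t)) - mean_rates(t))
                         for t in grid])                 # Poisson log likelihood
    log_like -= log_like.max()                           # numerical stability
    p = np.exp(log_like)
    return p / p.sum()

rng = np.random.default_rng(0)
true_theta = 1.2
spikes = rng.poisson(mean_rates(true_theta))             # one noisy population response
grid = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
post = posterior(spikes, grid)
print("posterior-mode estimate:", grid[np.argmax(post)], " true location:", true_theta)
```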
“…Recent research in computational neuroscience points out the importance of continuous attractors [63,35]. Consider [22] a nonlinear neural network model…”
Section: ✷ Example 23 (mentioning; confidence: 99%)
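The excerpt only alludes to the model it considers. As a generic illustration of a continuous attractor, the sketch below implements a ring network with von-Mises-shaped recurrent weights and squaring plus divisive normalization, in the spirit of the basis-function networks of Deneve et al. (1999, 2001) and Latham et al. (2003); all parameter values are placeholders and nothing here is taken from reference [22].

```python
import numpy as np

# Minimal ring-shaped continuous attractor sketch (placeholder parameters).

n = 100
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # preferred directions
K_w = 3.0                                                   # weight-profile width
W = np.exp(K_w * (np.cos(theta[:, None] - theta[None, :]) - 1.0))

def step(o, S=0.1, mu=0.01):
    """One update: linear pooling through W, then squaring and divisive normalization."""
    u = W @ o
    return u**2 / (S + mu * np.sum(u**2))

rng = np.random.default_rng(1)
o = np.maximum(np.cos(theta - 2.0), 0.0) + 0.3 * rng.standard_normal(n)  # noisy hill
o = np.maximum(o, 0.0)
for _ in range(20):
    o = step(o)          # no external input: the activity is sustained recurrently

# The network relaxes to a smooth hill of activity; the family of such hills,
# one per position on the ring, is the continuous attractor.
print("hill peak near theta =", theta[np.argmax(o)])
```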
“…This parameter controls the width of the weight pattern and, as a result, the width of the pattern of activity in the basis function layer. The optimal value can be inferred from our previous analytical work (Deneve et al., 2001; Latham et al., 2003). In both object and arm tracking, we obtained optimal performance for K_w equal to 3.…”
Section: Mathematical Formalization (mentioning; confidence: 71%)
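For concreteness, the snippet below shows how a concentration parameter K_w would set the width of a von-Mises-like weight profile. The kernel form is an assumption made here for illustration; the excerpt only states that K_w controls the width and that K_w = 3 was optimal.

```python
import numpy as np

# How K_w sets the width of an assumed weight profile w(d) = exp(K_w * (cos d - 1)).
d = np.linspace(-np.pi, np.pi, 1001)
for K_w in (1.0, 3.0, 10.0):
    w = np.exp(K_w * (np.cos(d) - 1.0))
    fwhm = np.ptp(d[w >= 0.5])        # full width at half maximum of the profile
    print(f"K_w = {K_w:4.1f}  ->  weight-profile FWHM = {fwhm:.2f} rad")
```

Larger K_w gives a narrower weight profile and, per the excerpt, a correspondingly narrower pattern of activity in the basis-function layer.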
“…As long as sensory gains are adjusted accordingly and the condition for optimality derived by Deneve et al. (1999) and Latham et al. (2003) is satisfied, the network approximates the performance of a Kalman filter.…”
Section: Mathematical Formalization (mentioning; confidence: 92%)
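The benchmark invoked here is a standard Kalman filter. A minimal one-dimensional version, tracking a random-walk state from noisy observations, is sketched below; the noise variances are arbitrary placeholders, and this is the generic textbook filter, not the network model itself.

```python
import numpy as np

# One-dimensional Kalman filter tracking a random-walk state from noisy observations.
rng = np.random.default_rng(2)
q, r = 0.05, 0.5          # process (drift) variance and observation variance (assumed)
T = 200

x = np.cumsum(np.sqrt(q) * rng.standard_normal(T))        # drifting latent state
z = x + np.sqrt(r) * rng.standard_normal(T)               # noisy observations

x_hat, p = 0.0, 1.0        # posterior mean and variance
estimates = []
for obs in z:
    p += q                             # predict: variance grows with the drift
    k = p / (p + r)                    # Kalman gain
    x_hat += k * (obs - x_hat)         # update: weighted correction toward the data
    p *= (1.0 - k)
    estimates.append(x_hat)

err = np.sqrt(np.mean((np.array(estimates) - x) ** 2))
print(f"Kalman RMS tracking error: {err:.3f}  (observation noise sd: {np.sqrt(r):.3f})")
```

The excerpt's claim is that, with sensory gains adjusted appropriately, the attractor network's tracking performance approaches that of such a filter.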