2012
DOI: 10.1073/pnas.1117386109

Fundamental limits on persistent activity in networks of noisy neurons

Abstract: Neural noise limits the fidelity of representations in the brain. This limitation has been extensively analyzed for sensory coding. However, in short-term memory and integrator networks, where noise accumulates and can play an even more prominent role, much less is known about how neural noise interacts with neural and network parameters to determine the accuracy of the computation. Here we analytically derive how the stored memory in continuous attractor networks of probabilistically spiking neurons will degr…
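
As a rough illustration of the abstract's point, the sketch below simulates the stored value of a continuous attractor memory as a random walk along the attractor. This is a minimal sketch, not the paper's derivation; the 1/N diffusion scaling and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's model): in a continuous attractor,
# spiking noise makes the stored value diffuse along the attractor,
# so memory error grows with elapsed time and shrinks with network
# size N. The 1/N scaling and all parameters are assumptions.
rng = np.random.default_rng(0)

dt, T = 1e-2, 10.0          # timestep and delay period (s)
steps = int(T / dt)
trials = 2000               # independent memory trials

for N in (100, 400, 1600):  # network sizes (assumed)
    D = 1.0 / N             # effective diffusion coefficient, ~1/N
    # Random walk of the remembered value x along the attractor.
    kicks = rng.normal(0.0, np.sqrt(2 * D * dt), size=(trials, steps))
    x = np.cumsum(kicks, axis=1)
    print(f"N={N:5d}  Var[x(T)] = {x[:, -1].var():.4f}"
          f"  (theory 2*D*T = {2 * D * T:.4f})")
```

The printed end-of-delay variance grows linearly in T and falls as 1/N, the qualitative trade-off between network size, noise, and memory accuracy that the abstract analyzes.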


Citation Types: 16 supporting · 208 mentioning · 0 contrasting

Year Published: 2014–2022


Cited by 131 publications (227 citation statements)
References 53 publications

“…Alternatively, maintenance and manipulation of analog information can be achieved using line attractor dynamics [36, 37], even without persistent input. But it is widely thought that line attractor networks require fine tuning to avoid memory leak and that even small amounts of memory noise would accumulate to dominate the representation over prolonged timescales [19, 20, 23, 38]. …”
Section: Discussion (mentioning)
confidence: 99%
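
The fine-tuning concern quoted above can be made concrete with a minimal leaky-integrator sketch (the dynamics and all parameters are illustrative assumptions, not any cited model): perfect tuning corresponds to an infinite effective time constant, a small mistuning makes the stored value leak toward baseline, and noise diffuses the trace even when tuning is perfect.

```python
import numpy as np

# Minimal sketch of the fine-tuning problem: a line attractor realized
# as an integrator dx/dt = -x/tau + noise. tau = inf is perfect tuning;
# a finite tau (mistuning) makes the memory leak, while the noise term
# diffuses it regardless. All parameter values are assumptions.
rng = np.random.default_rng(1)

dt, T = 1e-3, 10.0            # timestep and delay period (s)
steps = int(T / dt)
sigma = 0.05                  # noise amplitude (assumed)

for tau in (np.inf, 20.0, 2.0):   # perfect, mild, strong mistuning
    x = np.ones(500)              # 500 trials, stored value x0 = 1
    for _ in range(steps):
        leak = 0.0 if np.isinf(tau) else -x / tau
        x = x + leak * dt + sigma * np.sqrt(dt) * rng.normal(size=x.size)
    print(f"tau={tau:>5}: mean x(T) = {x.mean():+.3f}, std = {x.std():.3f}")
```

Even at tau = inf the trace stays unbiased, but its standard deviation grows as sigma*sqrt(T): this is the noise-accumulation point the quote makes.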
“…In such circuits a particularly strong fluctuation can kick the activity out of an “undecided” state into one of the decision states, such a transition being more likely to the decision state with greater underlying activation. The advantage of such attractor-based methods is that internal noise, which limits the usefulness of perfect integration [82] (it accumulates as Brownian motion in standard integration models of decision making), can actually boost performance by attractor transitions, generating a decision within an appropriate time window, in a method resembling stochastic resonance [78]. …”
Section: Computational Models With Attractor-state Itinerancy (mentioning)
confidence: 99%
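
A minimal double-well sketch of the attractor-transition mechanism described above (illustrative one-dimensional dynamics, not the cited papers' spiking networks): x = 0 is the undecided state, x = ±1 are the decision attractors, a weak evidence bias tilts the landscape, and noise supplies the kick that commits the system to one attractor within a finite time.

```python
import numpy as np

# Minimal sketch of noise-driven attractor transitions in decision
# making: dx/dt = x - x**3 + bias + noise. The cubic term gives
# attractors near x = +1 and x = -1 with an unstable "undecided"
# point between them. bias and sigma are illustrative assumptions.
rng = np.random.default_rng(2)

dt, T = 1e-3, 5.0
steps = int(T / dt)
trials = 1000
bias = 0.1        # weak evidence toward the +1 decision (assumed)
sigma = 0.5       # noise amplitude (assumed)

x = np.zeros(trials)                  # all trials start undecided
for _ in range(steps):
    drift = x - x**3 + bias           # tilted double-well force
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=trials)

print(f"P(decide +) = {np.mean(x > 0):.2f} with bias {bias:+.2f}")
```

The noise produces occasional wrong decisions, but it also lets trajectories escape the undecided region even when the bias is zero, the "decision within an appropriate time window" behavior the quote attributes to attractor transitions.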
“…S9). This diffusivity exceeds, by a factor of 20-50, the predicted value in a matched model (see [40], SI S10 and Fig. S10), Fig.…
Section: Evidence Of Input Aligned To Manifold (mentioning)
confidence: 70%
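
For context on the quantity being compared above: for a purely diffusive memory trace the mean squared displacement grows as MSD(t) = 2*D*t, so a diffusivity D can be read off as half the slope of MSD against time. The sketch below does this on synthetic trajectories; the factor-of-20-50 comparison itself comes from the cited analysis, not from this code.

```python
import numpy as np

# Minimal sketch: estimate a diffusivity D from drifting memory
# trajectories via the slope of the mean squared displacement.
# D_true, dt, T, and the trial count are illustrative assumptions.
rng = np.random.default_rng(3)

dt, T, D_true = 1e-2, 20.0, 0.05
steps = int(T / dt)
trials = 400

kicks = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(trials, steps))
x = np.cumsum(kicks, axis=1)          # trajectories, one row per trial

t = dt * np.arange(1, steps + 1)
msd = (x**2).mean(axis=0)             # MSD(t) averaged over trials
D_hat = np.polyfit(t, msd, 1)[0] / 2  # D = slope / 2
print(f"estimated D = {D_hat:.4f} (ground truth {D_true})")
```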