2002
DOI: 10.1162/08997660260293247

Optimal Short-Term Population Coding: When Fisher Information Fails

Abstract: Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons N (place constraint), the limited decoding time window length T (time constraint), the maximum neuronal firing rate f_max (power constraint), and the maximal average rate ⟨f⟩…
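The constraints named in the abstract (N, T, f_max) all enter the standard Fisher-information calculation for a population code. As a minimal sketch, assuming independent Poisson neurons with Gaussian tuning curves (the tuning-curve shape and all parameter values here are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Sketch: Fisher information of a population of N independent Poisson
# neurons with Gaussian tuning curves, observed over a decoding window T.
# The names N, T, f_max mirror the abstract's constraints; the tuning-curve
# form and parameter values are assumptions for illustration.
def fisher_information(x, centers, f_max=50.0, sigma=0.2, T=0.02):
    rates = f_max * np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # f_i(x)
    d_rates = rates * (centers - x) / sigma ** 2                      # f_i'(x)
    # For independent Poisson counts: J(x) = T * sum_i f_i'(x)^2 / f_i(x)
    return T * np.sum(d_rates ** 2 / rates)

centers = np.linspace(-1.0, 1.0, 10)  # N = 10 preferred stimuli
J = fisher_information(0.0, centers)
```

Note that J grows linearly in both T and f_max, which is precisely why the Fisher-information picture becomes misleading in the short-time regime the paper studies.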

Cited by 98 publications (120 citation statements). References 29 publications.
“…This setting is very close to the standard transform coding setting [42], as it essentially replaces the quantization by Gaussian noise. From previous work on optimal population coding, we can expect even larger changes in the shape of optimal neural image representations by choosing a Poisson noise model and a nonlinearity [47,48,63]. In the above list of examples, we have focused on the aspect of coding efficiency, but other objectives besides coding efficiency can be optimized as well. The crucial point in optimal representation learning is that the objective function really defines the criterion according to which one would like to judge the performance of the representation.…”
Section: Optimal Representation Learning: Unsupervised Learning
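The "quantization replaced by Gaussian noise" view above has a compact quantitative form: a linear transform coefficient of variance s² sent through additive Gaussian noise of variance n² carries at most the Gaussian channel rate per coefficient. A small sketch (ours, not taken from the cited work; names are illustrative):

```python
import math

# Information per transform coefficient under additive Gaussian noise:
# the Gaussian channel rate 0.5 * log2(1 + signal_var / noise_var).
def bits_per_coefficient(signal_var, noise_var):
    return 0.5 * math.log2(1.0 + signal_var / noise_var)

bits = bits_per_coefficient(4.0, 1.0)  # SNR of 4
```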
“…In this way, we obtained 10 × 63² = 39,690 samples. For the test set, we got the same number of samples from the same ten images by choosing those 63² patches that tile the center part of each image instead of the upper left. That is, a margin 8 pixels wide is left out at all four edges of the given image in this case.…”
Section: Appendix C: Sampling Scheme
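The sampling scheme above can be sketched at a reduced scale. This is our illustration, not the cited paper's code: a toy 64×64 image and 4×4 patches stand in for the real sizes, with training patches taken from the upper-left corner and test patches shifted inward by an 8-pixel margin.

```python
import numpy as np

# Extract non-overlapping k x k patches on an n_tiles x n_tiles grid,
# starting at a given (row, col) offset. All sizes are illustrative.
def tile_patches(image, k, n_tiles, offset=(0, 0)):
    r0, c0 = offset
    return [image[r0 + i * k : r0 + (i + 1) * k,
                  c0 + j * k : c0 + (j + 1) * k]
            for i in range(n_tiles) for j in range(n_tiles)]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
train = tile_patches(img, k=4, n_tiles=8)                # upper-left tiling
test = tile_patches(img, k=4, n_tiles=8, offset=(8, 8))  # 8-pixel margin
```

As in the excerpt, the train and test sets have the same number of patches but cover different regions of each image.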
“…Single-peaked place fields are analogous to the tuning curves for orientation in visual and motor cortices, for which the questions of neuronal coding and optimal tuning widths have been investigated extensively (Paradiso, 1988; Seung & Sompolinsky, 1993; Brunel & Nadal, 1998; Zhang & Sejnowski, 1999; Pouget, Deneve, Ducom, & Latham, 1999; Bethge, Rotermund, & Pawelzik, 2002; Brown & Bäcker, 2006; Bobrowski, Meir, & Eldar, 2009). Theoretical studies on the coding properties of grid cells (Burak, Brookings, & Fiete, 2006; Fiete, Burak, & Brookings, 2008) have dealt with the spatial range encoded by populations of grid cells, without assuming an explicit noise model.…”
Section: Introduction
“…In the context of neural population coding, many authors have calculated the Fisher information (Paradiso, 1988; Seung & Sompolinsky, 1993; Brunel & Nadal, 1998; Zhang & Sejnowski, 1999; Pouget et al., 1999; Eurich & Wilke, 2000; Wilke & Eurich, 2002; Bethge et al., 2002; Brown & Bäcker, 2006). However, it is also known that no such estimator will attain the lower bound if the neurons have Poisson spike statistics and the expected number of spikes is low, even when a neuron is firing at its maximal rate (Bethge et al., 2002). In other words, if the product of the firing rate f_max and the time window T for counting spikes obeys f_max T ≈ 1, the Fisher information greatly exaggerates the true spatial resolution of the population code.…”
Section: Introduction
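The low-count failure mode described above can be made concrete with an exact, deterministic calculation (our illustration, not the paper's analysis): the minimal error probability for maximum-likelihood discrimination between two Poisson spike counts. When f_max·T ≈ 1 the count is almost always 0 or 1, and even very different rates remain hard to tell apart, regardless of what the Fisher information of the rate function suggests.

```python
import math

# Exact Bayes error for ML discrimination of two Poisson means
# (equal priors): P(error) = 0.5 * sum_k min(P(k | lam1), P(k | lam2)).
def bayes_error(lam1, lam2, kmax=60):
    p = lambda lam, k: math.exp(-lam) * lam ** k / math.factorial(k)
    return 0.5 * sum(min(p(lam1, k), p(lam2, k)) for k in range(kmax + 1))

f1, f2 = 10.0, 100.0                            # two stimuli, rates in spikes/s
err_short = bayes_error(f1 * 0.01, f2 * 0.01)   # T = 10 ms: f_max * T = 1
err_long = bayes_error(f1 * 1.0, f2 * 1.0)      # T = 1 s
```

In the short window the error stays above 20% even for a tenfold rate difference, while over a long window it is essentially zero, matching the excerpt's point that Fisher information (which scales linearly in T) cannot capture this regime.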
“…sigmoidally) have received little attention. Notable exceptions are [3,4], which maximize Fisher information and consider only Poisson variability. In contrast, we maximize mutual information and consider quasi-Poisson variability.…”
Section: Discussion