In retina and in cortical slice, the collective response of spiking neural populations is well described by "maximum-entropy" models in which only pairs of neurons interact. We asked how such pairwise interactions should be organized to maximize the amount of information represented in population responses. To this end, we extended the linear-nonlinear-Poisson model of single neural response to include pairwise interactions, yielding a stimulus-dependent, pairwise maximum-entropy model. We found that as we varied the noise level in single neurons and the distribution of network inputs, the optimal pairwise interactions smoothly interpolated among network functions that are usually regarded as discrete: stimulus decorrelation, error correction, and independent encoding. These functions reflected a trade-off between the efficient consumption of finite neural bandwidth and the use of redundancy to mitigate noise. Spontaneous activity in the optimal network reflected stimulus-induced activity patterns, and single-neuron response variability overestimated network noise. Our analysis suggests that, rather than having a single coding principle hardwired in their architecture, networks in the brain should adapt their function to changing noise and stimulus correlations.

adaptation | neural networks | Ising model | attractor states

Populations of sensory neurons encode information about stimuli into sequences of action potentials, or spikes (1).
Experiments with pairs or small groups of neurons have observed many different coding strategies (2-6): (i) independence, where each neuron responds independently to the stimulus; (ii) decorrelation, where neurons interact to give a decorrelated representation of the stimulus; (iii) error correction, where neurons respond redundantly, in patterns, to combat noise; and (iv) synergistic coding, where population activity patterns carry information unavailable from the neurons considered separately.

How should a network arrange its interactions to best represent an ensemble of stimuli? Theoretically, there has been controversy over the "correct" design principle for neural population codes (7-11). On the one hand, neurons have a limited repertoire of response patterns, and information is maximized by using each neuron to represent a different aspect of the stimulus. To achieve this, interactions in a network should be organized to remove correlations in network inputs and thus create a decorrelated network response. On the other hand, neurons are noisy, and noise is combated via redundancy, where different patterns related by noise encode the same stimulus. To achieve this, interactions in a network should be organized to exploit existing correlations in neural inputs to compensate for noise-induced errors. Such a trade-off between decorrelation and noise reduction possibly accounts for the organization of several biological information-processing systems, e.g., the adaptation of center-surround receptive fields to ambient light intensity (12-14), the structure of retinal ganglion cell mosaics (15-18),...
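The stimulus-dependent pairwise maximum-entropy (Ising-like) model described above assigns each binary spiking pattern a probability set only by single-neuron biases and pairwise couplings. The following sketch, with hypothetical parameter values, enumerates all patterns of a small network exactly; the coupling matrix `J` and biases `h` are illustrative assumptions, not fitted values from the paper.

```python
import itertools

import numpy as np


def pairwise_maxent(h, J):
    """Exact pattern probabilities for a small pairwise maximum-entropy
    (Ising-like) model: P(sigma) proportional to exp(h.sigma + sigma.J.sigma/2),
    where sigma_i in {0, 1} marks whether neuron i spikes in a time bin."""
    n = len(h)
    patterns = np.array(list(itertools.product([0, 1], repeat=n)))
    # Unnormalized log-probability of every one of the 2^n patterns.
    log_w = patterns @ h + 0.5 * np.einsum("pi,ij,pj->p", patterns, J, patterns)
    w = np.exp(log_w)
    return patterns, w / w.sum()


# Toy network: 3 neurons with a positive coupling between neurons 0 and 1.
h = np.array([-1.0, -1.0, -1.0])
J = np.zeros((3, 3))
J[0, 1] = J[1, 0] = 2.0
patterns, p = pairwise_maxent(h, J)
```

With a positive coupling, the joint spiking probability of neurons 0 and 1 exceeds the product of their marginals, i.e., the interaction introduces redundancy of the kind the trade-off above is about.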
Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and should lie between 1.4 and 1.7 for realistic neurons, and (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict the optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.

DOI: http://dx.doi.org/10.7554/eLife.08362.001
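The √e prediction can be checked numerically under the cost structure the abstract sketches: with a geometric progression of scales, the number of modules needed for a fixed resolution scales as 1/ln r, while (for idealized neurons in two dimensions) the neuron count per module grows as r². The cost form below is an assumption distilled from that argument, not a quote of the paper's derivation.

```python
import math


def grid_code_cost(r, d=2):
    """Total neuron count, up to a constant, for a hierarchical grid code in
    d dimensions with scale ratio r: modules needed scale as 1/ln(r) and
    neurons per module as r**d, so cost ~ r**d / ln(r). (Cost form assumed
    from the theory sketched in the abstract.)"""
    return r**d / math.log(r)


# Dense grid search for the optimal scale ratio in 2D.
ratios = [1.0 + 0.0001 * k for k in range(1, 30000)]
r_star = min(ratios, key=grid_code_cost)
```

Setting the derivative of r²/ln r to zero gives 2 ln r = 1, i.e., r = √e ≈ 1.65, squarely inside the 1.4–1.7 band quoted for realistic neurons.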
We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for the intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable, that is, whose computational cost scales well with the number of units observed. Our algorithm accomplishes this goal, and is fast, because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human interaction plays a key role in our method, but the effort is minimized and streamlined via a graphical interface. We illustrate our method on data from guinea pig retinal ganglion cells and document its performance on simulated data consisting of spikes added to experimentally measured background noise. We present several tests demonstrating that the algorithm is highly accurate: it exhibits low error rates on fits to synthetic data, low refractory-violation rates, good receptive field coverage, and consistency across users.
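The core difficulty the abstract mentions, identifying spikes even when they overlap, is often handled by subtracting fitted templates from the trace and repeating. The toy sketch below illustrates that subtract-and-repeat idea on a single channel; it is not the published algorithm, and the templates and threshold are made up for illustration.

```python
import numpy as np


def match_templates(trace, templates, threshold=0.5):
    """Toy greedy template matching: repeatedly place the template that most
    reduces the squared residual, subtract it, and repeat until no placement
    improves the fit by more than `threshold`. Returns (unit, time) pairs."""
    trace = trace.astype(float).copy()
    found = []
    while True:
        best = None
        for u, tpl in enumerate(templates):
            L = len(tpl)
            for t in range(len(trace) - L + 1):
                seg = trace[t:t + L]
                # Drop in squared error if this template is subtracted here.
                gain = np.sum(seg**2) - np.sum((seg - tpl) ** 2)
                if gain > threshold and (best is None or gain > best[0]):
                    best = (gain, u, t)
        if best is None:
            return sorted(found)
        _, u, t = best
        trace[t:t + len(templates[u])] -= templates[u]
        found.append((u, t))


# Two hypothetical unit templates and a noiseless trace containing one of each.
tpl0 = np.array([0.0, 2.0, -1.0, 0.0])
tpl1 = np.array([0.0, 1.0, 3.0, 0.0])
trace = np.zeros(12)
trace[2:6] += tpl0
trace[6:10] += tpl1
spikes = match_templates(trace, [tpl0, tpl1])
```

A scalable implementation would restrict the search to the electrodes near each unit, which is the spatial-locality point the abstract makes.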
The visual system is challenged with extracting and representing the behaviorally relevant information contained in natural inputs of great complexity and detail. This task begins in the sensory periphery: retinal receptive fields and circuits are matched to the first- and second-order statistical structure of natural inputs. This matching enables the retina to remove stimulus components that are predictable (and therefore uninformative) and primarily transmit what is unpredictable (and therefore informative). Here we show that this design principle applies to more complex aspects of natural scenes, and to central visual processing. We do this by classifying high-order statistics of natural scenes according to whether they are uninformative or informative. We find that the uninformative ones are perceptually nonsalient, while the informative ones are highly salient and correspond to previously identified perceptual mechanisms whose neural basis is likely central. Our results suggest that the principle of efficient coding not only accounts for filtering operations in the sensory periphery, but also shapes subsequent stages of sensory processing that are sensitive to high-order image statistics.

natural scene statistics | psychophysics | vision

Many aspects of early visual processing appear to be shaped by the necessity for an efficient representation of the information in natural stimuli.
Examples include: (i) the center-surround receptive field of the retinal ganglion cell, which removes spatial correlations in natural images and decreases retinal redundancy (1-3); (ii) the twofold excess of retinal OFF pathways (encoding negative contrasts) as compared to ON pathways (encoding positive contrasts), which matches the asymmetric contrast structure of natural scenes (4); (iii) cone spectral sensitivities and color opponency in ganglion cells, which maximize chromatic information from natural scenes (5-7); (iv) the overlaps of ganglion cell receptive fields within the retinal mosaic, which balance redundancy reduction against signal-to-noise ratio improvement (8, 9); and (v) the shapes of the nonlinear response functions of early sensory neurons, and their adaptation to stimulus variance, which have been related to the skewed intensity distributions that occur in natural stimuli (10, 11). In all cases, physiological and anatomical characteristics of the visual system are accounted for by a simple efficient coding principle: sensory systems invest their resources in relation to the expected gain in information (4).

All these examples refer to first-order image statistics (the distribution of light intensities at single pixels) or simple second-order image statistics (covariances of light intensities at pairs of pixels), and to processing within the retina. It is unknown whether such an explanatory framework extends to more complex image statistics, or to central visual processing. There are two reasons for this gap in knowledge. First, higher-order image statistics are challenging to analyze, because of their complexity and high di...
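The second-order redundancy reduction attributed above to center-surround receptive fields has a simple linear analogue: a whitening transform that removes pairwise correlations. The sketch below uses ZCA whitening on synthetic correlated "pixel" inputs as an illustration of the principle, not as a model of the actual retinal filter.

```python
import numpy as np


def zca_whiten(X):
    """Decorrelating (ZCA) transform: multiply centered data by C^(-1/2),
    where C is the sample covariance, so the output covariance becomes the
    identity. This is the linear analogue of redundancy reduction."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(C)           # C is symmetric positive definite
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return Xc @ W


rng = np.random.default_rng(0)
# Correlated inputs, standing in for neighboring points of a natural image.
raw = rng.normal(size=(5000, 3))
mix = np.array([[1.0, 0.8, 0.5], [0.0, 1.0, 0.8], [0.0, 0.0, 1.0]])
Y = zca_whiten(raw @ mix)
```

After the transform, the output covariance is the identity, so no channel carries information that is linearly predictable from the others.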
Across the nervous system, certain population spiking patterns are observed far more frequently than others. One hypothesis about this structure is that these collective activity patterns function as population codewords, or collective modes, carrying information distinct from that of any single cell. We investigate this phenomenon in recordings of ∼150 retinal ganglion cells, the retina's output. We develop a novel statistical model that decomposes the population response into modes; it predicts the distribution of spiking activity in the ganglion cell population with high accuracy. We found that the modes represent localized features of the visual stimulus that are distinct from the features represented by single neurons. Modes form clusters of activity states that are readily discriminated from one another. When we repeated the same visual stimulus, the same mode was robustly elicited. These results suggest that the collective signaling of retinal ganglion cells is endowed with a form of error-correcting code, a principle that may hold in brain areas beyond the retina.
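The error-correcting reading of well-separated activity clusters parallels classical codes: a noise-corrupted pattern is decoded to the nearest codeword. The sketch below uses Hamming-distance decoding on hypothetical binary codewords as a toy stand-in for that idea; it is not the statistical model the abstract describes.

```python
def nearest_mode(pattern, modes):
    """Map a (possibly noise-corrupted) binary population pattern to the
    closest mode under Hamming distance, a toy stand-in for the idea that
    well-separated activity clusters let noise-induced errors be corrected."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(modes, key=lambda m: hamming(pattern, m))


# Two well-separated modes over 6 neurons (hypothetical codewords).
modes = [(1, 1, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1)]
noisy = (1, 0, 1, 0, 0, 0)   # the first mode with one neuron flipped
decoded = nearest_mode(noisy, modes)
```

Because the two modes differ in six neurons, any single flipped neuron still lands closer to the original mode, so decoding recovers it exactly.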