Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons N (place constraint), the limited decoding time window length T (time constraint), the maximum neuronal firing rate f_max (power constraint), and the maximal average rate ⟨f⟩_max (energy constraint). While Fisher information provides a general lower bound for the mean squared error of unbiased signal reconstruction, its use to characterize the coding precision is limited. Analyzing simple examples, we illustrate some typical pitfalls and thereby show that Fisher information provides a valid measure for the precision of a code only if the dynamic range (f_min T, f_max T) is sufficiently large. In particular, we demonstrate that the optimal width of gaussian tuning curves depends on the available decoding time T. Within the broader class of unimodal tuning functions, it turns out that the shape of a Fisher-optimal coding scheme is not unique. We solve this ambiguity by taking the minimum mean square error into account, which leads to flat tuning curves. The tuning width, however, remains to be determined by energy constraints rather than by the principle of efficient coding.
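As an illustrative sketch of the quantities involved (assuming independent Poisson spike-count noise and made-up parameters, not the paper's setup): for a population of Gaussian tuning curves f_i(x) read out over a window T, the Fisher information is J(x) = T Σ_i f_i'(x)² / f_i(x), and the Cramér-Rao inequality bounds the mean squared error of any unbiased estimator by 1/J(x).

```python
import numpy as np

def fisher_information(x, centers, sigma, f_max, T, f_min=1e-3):
    """J(x) for tuning curves f_i(x) = f_min + f_max * exp(-(x - c_i)^2 / (2 sigma^2)),
    assuming independent Poisson spike counts over a window of length T."""
    f = f_min + f_max * np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))
    df = -(x - centers) / sigma ** 2 * (f - f_min)   # derivative f_i'(x)
    return T * np.sum(df ** 2 / f)

centers = np.linspace(-5.0, 5.0, 21)   # N = 21 preferred stimuli (illustrative)
T = 0.1                                # 100 ms decoding window
J_narrow = fisher_information(0.0, centers, sigma=0.3, f_max=100.0, T=T)
J_wide = fisher_information(0.0, centers, sigma=2.0, f_max=100.0, T=T)
# Narrow tuning maximizes J in this toy population, but the abstract's point
# is that the Cramer-Rao bound is only tight when the counts f * T are large
# enough, so J alone can be misleading for short decoding windows T.
```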
Recent experimental and theoretical work has established the hypothesis that cortical neurons operate close to a critical state which describes a phase transition from chaotic to ordered dynamics. Critical dynamics are suggested to optimize several aspects of neuronal information processing. However, although critical dynamics have been demonstrated in recordings of spontaneously active cortical neurons, little is known about how these dynamics are affected by task-dependent changes in neuronal activity when the cortex is engaged in stimulus processing. Here we explore this question in the context of cortical information processing modulated by selective visual attention. In particular, we focus on recent findings that local field potentials (LFPs) in macaque area V4 demonstrate an increase in γ-band synchrony and a simultaneous enhancement of object representation with attention. We reproduce these results using a model of integrate-and-fire neurons where attention increases synchrony by enhancing the efficacy of recurrent interactions. In the phase space spanned by excitatory and inhibitory coupling strengths, we identify critical points and regions of enhanced discriminability. Furthermore, we quantify encoding capacity using information entropy. We find a rapid enhancement of stimulus discriminability with the emergence of synchrony in the network. Strikingly, only a narrow region in the phase space, at the transition from subcritical to supercritical dynamics, supports the experimentally observed discriminability increase. At the supercritical border of this transition region, information entropy decreases drastically as synchrony sets in. At the subcritical border, entropy is maximized under the assumption of a coarse observation scale. Our results suggest that cortical networks operate at such near-critical states, allowing minimal attentional modulations of network excitability to substantially augment stimulus representation in the LFPs.
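The subcritical/critical/supercritical distinction invoked above can be illustrated with the standard toy model of criticality, a branching process (this is a generic illustration, not the paper's integrate-and-fire network): each spike triggers a Poisson number of successor spikes with mean m, the branching parameter. For m < 1 (subcritical) avalanches die out with mean total size 1/(1 − m), which diverges as m approaches the critical point m = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(m, max_size=10_000):
    """Total number of spikes in one avalanche of a branching process with
    Poisson offspring of mean m, started from a single spike."""
    size, active = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(m * active)   # successors of the active spikes
        size += active
    return size

def mean_size(m, trials=5000):
    return np.mean([avalanche_size(m) for _ in range(trials)])

s_sub = mean_size(0.5)    # deep in the subcritical regime, mean near 2
s_near = mean_size(0.9)   # near-critical, mean near 10
# Mean avalanche size grows sharply as m approaches 1 from below,
# which is why small modulations of coupling can have large effects
# near the transition.
```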
The speed and reliability of mammalian perception indicate that cortical computations can rely on very few action potentials per involved neuron. Together with the stochasticity of single-spike events in cortex, this appears to imply that large populations of redundant neurons are needed for rapid computations with action potentials. Here we demonstrate that very fast and precise computations can be realized also in small networks of stochastically spiking neurons. We present a generative network model for which we derive biologically plausible algorithms that perform spike-by-spike updates of the neurons' internal states and adaptation of their synaptic weights by maximizing the likelihood of the observed spike patterns. Paradigmatic computational tasks demonstrate the online performance and learning efficiency of our framework. The potential relevance of our approach as a model for cortical computation is discussed.
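A minimal sketch of what a spike-by-spike likelihood update can look like (the multiplicative rule, the constant eps, and the matrices below are illustrative assumptions, not the paper's algorithm): assume a generative model in which the probability that the next input spike arrives at neuron s is p(s) = Σ_h W[s, h] · r[h], with latent activations r updated after every observed spike so as to increase the likelihood of the spike train.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_update(r, W, s, eps=0.1):
    """One online EM-style update of the latent state r after a spike at input s:
    mix r with the posterior responsibility of each hidden cause for that spike."""
    posterior = W[s] * r / (W[s] @ r)
    return (1.0 - eps) * r + eps * posterior

# Two hidden causes, four input neurons; each column of W is the spike
# distribution generated by one cause (values chosen for illustration).
W = np.array([[0.70, 0.10],
              [0.20, 0.10],
              [0.05, 0.30],
              [0.05, 0.50]])
r = np.array([0.5, 0.5])                       # uninformative initial state
spikes = rng.choice(4, size=200, p=W[:, 0])    # spike train generated by cause 0
for s in spikes:
    r = spike_update(r, W, s)
# After many spikes, the state r should favor the generating cause.
```

The update keeps r on the probability simplex (it is a convex combination of two normalized vectors), so the internal state can itself be read as a posterior over hidden causes.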
Many experimental studies concerning the neuronal code are based on graded responses of neurons, given by the emitted number of spikes measured in a certain time window. Correspondingly, a large body of neural network theory deals with analogue neuron models and discusses their potential use for computation or function approximation. All physical signals, however, are of limited precision, and neuronal firing rates in cortex are relatively low. Here, we investigate the relevance of analogue signal processing with spikes in terms of optimal stimulus reconstruction and information theory. In particular, we derive optimal tuning functions taking the biological constraint of limited firing rates into account. It turns out that depending on the available decoding time T, optimal encoding undergoes a phase transition from discrete binary coding for small T towards analogue or quasi-analogue encoding for large T. The corresponding firing rate distributions are bimodal for all relevant T, in particular in the case of population coding.
Selective attention improves perception and modulates neuronal responses, but how attention-dependent changes of cortical activity improve the processing of attended objects is an open question. Changes in total signal strength or enhancements in signal-to-noise ratio have been proposed as putative mechanisms. However, it is still not clear whether, and to what extent, these processes contribute to the large perceptual improvements. We studied the ability to discriminate states of activity in visual cortex evoked by differently shaped objects depending on selective attention in monkeys. We found that gamma-band activity from V4 and V1 contains a high amount of information about stimulus shape, which for V4 recordings increases considerably with attention in successful trials, but not in the case of behavioral errors. This effect resulted from enhanced differences between the stimulus-specific distributions of power spectral amplitudes. It could be explained neither by enhancements of signal-to-noise ratios nor by changes in total signal power. Instead, our results indicate that attention causes underlying cortical network states to become more distinct for different stimuli, providing a new neurophysiological explanation for improvements of behavioral performance by attention. The absence of the enhancement in discriminability in trials with behavioral errors demonstrates the relevance of this novel neural mechanism for perception.