The brain maintains internal models of its environment to interpret sensory inputs and prepare actions. While behavioral studies demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.
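The central test described above — whether spontaneous activity matches activity evoked by natural scenes — amounts to comparing two distributions over neural activity patterns. A minimal sketch of such a comparison, assuming (hypothetically) that population activity has been binarized and each pattern hashed to an integer index, could use the Kullback-Leibler divergence between the two empirical pattern distributions:

```python
import numpy as np
from collections import Counter

def pattern_distribution(pattern_indices, n_patterns):
    # Empirical distribution over binary population patterns, each
    # hashed to an integer in [0, n_patterns); a small pseudocount
    # avoids zero probabilities when computing divergences.
    counts = Counter(pattern_indices)
    probs = np.array([counts.get(p, 0) + 1e-6 for p in range(n_patterns)])
    return probs / probs.sum()

def kl_divergence(p, q):
    # D_KL(P || Q) in bits; 0 iff the distributions are identical
    return float(np.sum(p * np.log2(p / q)))

# Hypothetical toy data: indices of binarized 3-neuron patterns (0..7)
rng = np.random.default_rng(0)
evoked = rng.choice(8, size=1000, p=[.3, .2, .15, .1, .1, .05, .05, .05])
spont  = rng.choice(8, size=1000, p=[.28, .22, .14, .11, .09, .06, .05, .05])

p_evoked = pattern_distribution(evoked, 8)
p_spont  = pattern_distribution(spont, 8)
print(kl_divergence(p_evoked, p_spont))  # low divergence -> similar distributions
```

Under this reading, the developmental prediction is that the divergence between evoked and spontaneous pattern distributions should shrink with age for natural stimuli.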
Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure, and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and reevaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations, leading to a new, sampling-based framework of how the cortex represents information and uncertainty.

Probabilistic perception, learning and representation of uncertainty: in need of a unifying approach

One of the longstanding computational principles in neuroscience is that the nervous system of animals and humans is adapted to the statistical properties of the environment [1]. This principle is reflected across all organizational levels, ranging from the activity of single neurons to networks and behavior, and it has been identified as key to the survival of organisms [2]. Such adaptation takes place on at least two distinct behaviorally relevant time scales: on the time scale of immediate inferences, as moment-by-moment processing of sensory input (perception), and on a longer time scale, by learning from experience.
Although statistically optimal perception and learning have most often been considered in isolation, here we promote them as two facets of the same underlying principle and treat them together under a unified approach.

Although there is considerable behavioral evidence that humans and animals represent, infer and learn about the statistical properties of their environment efficiently [3], and there is also converging theoretical and neurophysiological work on potential neural mechanisms of statistically optimal perception [4], there is a notable lack of convergence from physiological and theoretical studies explaining whether and how statistically optimal learning might occur in the brain. Moreover, there is a missing link between perception and learning: there exists virtually no crosstalk between these two lines of research focusing on common principles and on a unified framework down to the level of neural implementation. With recent advances in understanding the bases of probabilistic coding and the accumulating evidence supporting probabilistic computations in the cortex, it is now possible to take a closer look at both the basis of probabilistic learning and its relation to probabilistic perception.

We first provide a brief overview of the theoretical framework as well as behavioral and neural evidence for representing uncertainty in perceptual processes. To highlight th...
Summary

Neural responses in the visual cortex are variable, and there is now an abundance of data characterizing how the magnitude and structure of this variability depend on the stimulus. Current theories of cortical computation fail to account for these data; they either ignore variability altogether or only model its unstructured Poisson-like aspects. We develop a theory in which the cortex performs probabilistic inference such that population activity patterns represent statistical samples from the inferred probability distribution. Our main prediction is that perceptual uncertainty is directly encoded by the variability, rather than the average, of cortical responses. Through direct comparisons to previously published data as well as original data analyses, we show that a sampling-based probabilistic representation accounts for the structure of noise, signal, and spontaneous response variability and correlations in the primary visual cortex. These results suggest a novel role for neural variability in cortical dynamics and computations.
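The core idea of a sampling-based code — that uncertainty is carried by response variability rather than by the mean response — can be illustrated with a toy Gaussian inference problem. This is a sketch under simplified assumptions (one latent variable, conjugate Gaussian prior and likelihood), not the paper's cortical model:

```python
import numpy as np

rng = np.random.default_rng(1)

def posterior_samples(prior_var, likelihood_var, observation, n=5000):
    # Gaussian prior N(0, prior_var) combined with Gaussian likelihood
    # N(observation, likelihood_var) yields a Gaussian posterior; each
    # "neural response" at time t is read as one sample from it.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / likelihood_var)
    post_mean = post_var * observation / likelihood_var
    return rng.normal(post_mean, np.sqrt(post_var), size=n)

# High-contrast (reliable) stimulus: low likelihood variance
reliable = posterior_samples(1.0, 0.1, observation=1.0)
# Low-contrast (unreliable) stimulus: high likelihood variance
unreliable = posterior_samples(1.0, 2.0, observation=1.0)

# Across-time variability, not the mean, carries the uncertainty
print(reliable.var() < unreliable.var())  # True
```

In this picture, quenching a stimulus's ambiguity (e.g., raising contrast) narrows the posterior, and the across-trial or across-time variability of the sampled responses shrinks accordingly.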
Summary

Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states ("attractors") or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic "stabilized supralinear network"), best explains these modulations. Given the supralinear input/output functions of cortical neurons, increased stimulus drive strengthens effective network connectivity. This shifts the balance from interactions that amplify variability to suppressive inhibitory feedback, quenching correlated variability around more strongly driven steady states. Comparing to previously published and original data analyses, we show that this mechanism, unlike previous proposals, uniquely accounts for the spatial patterns and fast temporal dynamics of variability suppression. Specifying the cortical operating regime is key to understanding the computations underlying perception.
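The mechanism described — a supralinear input/output function making effective feedback stronger at higher drive, and thereby quenching fluctuations — can be sketched in a deliberately reduced form. The toy below is a single noisy rate unit with quadratic gain and inhibitory feedback (not the paper's full excitatory-inhibitory network); all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_variance(h, w=1.0, k=0.3, tau=1.0, sigma=0.1,
                      dt=0.01, steps=200_000, burn_in=20_000):
    # Euler-Maruyama simulation of
    #   tau * dr/dt = -r + k * [h - w*r]_+^2 + noise
    # Linearizing around the fixed point r*, the effective decay rate is
    # (1 + w * f'(r*)) / tau with f' = 2k(h - w*r*): larger drive h means
    # stronger feedback gain and hence smaller stationary variance.
    r = 0.0
    trace = np.empty(steps)
    for t in range(steps):
        drive = max(h - w * r, 0.0)          # net input after feedback
        drift = (-r + k * drive**2) / tau    # supralinear (quadratic) I/O
        r += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        trace[t] = r
    return trace[burn_in:].var()

var_low = simulate_variance(h=0.5)   # weak drive -> weak feedback
var_high = simulate_variance(h=5.0)  # strong drive -> strong feedback
print(var_high < var_low)  # variability is quenched by stronger drive
```

The qualitative effect — stimulus onset suppressing variability around the driven steady state — survives in this one-dimensional caricature because the quadratic gain makes the linearized feedback grow with the operating point.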
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.

Bayesian inference | probabilistic modeling | vision
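The essence of Bayesian model comparison for chunk learning can be sketched with two shapes and Beta-Bernoulli marginal likelihoods: one model treats the shapes as appearing independently, the other binds them into a single chunk. This toy (with a uniform Beta(1,1) prior and hypothetical scene data) is a simplification of the ideal learner, not its actual implementation:

```python
from math import lgamma

def beta_bernoulli_logev(k, n):
    # Log marginal likelihood of k successes in n Bernoulli trials
    # under a uniform Beta(1,1) prior on the rate:
    #   p(D) = k! (n-k)! / (n+1)!
    return lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)

# Hypothetical scenes: (shape A present?, shape B present?), 1 = present
scenes = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1)]
n = len(scenes)
kA = sum(a for a, _ in scenes)
kB = sum(b for _, b in scenes)

# Model 1: A and B occur independently
log_ev_indep = beta_bernoulli_logev(kA, n) + beta_bernoulli_logev(kB, n)

# Model 2: A and B form one chunk "AB" (must always co-occur); any scene
# with A != B would have zero likelihood under this strict chunk model
if all(a == b for a, b in scenes):
    log_ev_chunk = beta_bernoulli_logev(kA, n)
else:
    log_ev_chunk = float("-inf")

print(log_ev_chunk > log_ev_indep)  # True: chunk model wins on these data
```

The chunk model wins here because it explains the same co-occurrence data with half as many free parameters — the "accurate yet economical" representation the abstract describes, which an associative learner tracking only pairwise counts would not arbitrate the same way.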