Uncertainty is intrinsic to perception. Neural circuits that process sensory information must therefore also represent the reliability of this information. How they do so is a topic of debate. We propose a model of visual cortex in which average neural response strength encodes stimulus features, while cross-neuron variability in response gain encodes the uncertainty of these features. To test this model, we studied spiking activity of neurons in macaque V1 and V2 elicited by repeated presentations of stimuli whose uncertainty was manipulated in distinct ways. We show that gain variability of individual neurons is tuned to stimulus uncertainty, that this tuning is specific to the features encoded by these neurons, and that it is largely invariant to the source of uncertainty. We demonstrate that this behavior naturally arises from known gain-control mechanisms, and illustrate how downstream circuits can jointly decode stimulus features and their uncertainty from sensory population activity.
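The proposed coding scheme can be illustrated with a toy simulation. This is a minimal sketch under an assumed modulated-Poisson formulation: `simulate_population`, the example firing rates, and the gamma-distributed gain are illustrative choices, not details taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(mean_rates, gain_cv, n_trials=2000):
    # Each trial's rate is the tuned mean scaled by a gamma-distributed gain
    # with mean 1 and coefficient of variation gain_cv; spike counts are
    # Poisson given that gain. gain_cv stands in for encoded uncertainty.
    shape = 1.0 / gain_cv**2
    gains = rng.gamma(shape, scale=1.0 / shape, size=(n_trials, 1))
    return rng.poisson(gains * mean_rates)

rates = np.array([5.0, 20.0, 40.0])             # tuned mean counts of 3 neurons

low = simulate_population(rates, gain_cv=0.1)   # low stimulus uncertainty
high = simulate_population(rates, gain_cv=0.5)  # high stimulus uncertainty

# The mean responses (the feature code) are essentially unchanged, while the
# Fano factor (variance/mean) grows with gain variability (the uncertainty
# code): a downstream circuit could read both from the same population.
fano_low = low.var(axis=0) / low.mean(axis=0)
fano_high = high.var(axis=0) / high.mean(axis=0)
print(fano_low, fano_high)
```

Separating the two codes this way works because, for a modulated-Poisson neuron, the across-trial variance exceeds the Poisson prediction by an amount that scales with gain variability, while the mean is untouched.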
The nervous system achieves stable perceptual representations of objects despite large variations in the activity patterns of sensory receptors. Here, we explore perceptual constancy in the sense of touch. Specifically, we investigate the invariance of tactile texture perception across changes in scanning speed. Texture signals in the nerve have been shown to be highly dependent on speed: temporal spiking patterns in nerve fibers that encode fine textural features contract or dilate systematically with increases or decreases in scanning speed, respectively, resulting in concomitant changes in response rate. Nevertheless, texture perception has been shown, albeit with restricted stimulus sets and limited perceptual assays, to be independent of scanning speed. Indeed, previous studies investigated the effect of scanning speed on perceived roughness, only one aspect of texture, often with impoverished stimuli, namely gratings and embossed dot patterns. To fill this gap, we probe the perceptual constancy of a wide range of textures using two different paradigms: one that probes texture perception along well-established sensory dimensions independently and one that probes texture perception as a whole. We find that texture perception is highly stable across scanning speeds, irrespective of the texture or the perceptual assay. Any speed-related effects are dwarfed by differences in percepts evoked by different textures. This remarkable speed invariance of texture perception stands in stark contrast to the strong dependence of the texture responses of nerve fibers on scanning speed. Our results imply neural mechanisms that compensate for scanning speed to achieve stable representations of surface texture.

Our brain forms stable representations of objects regardless of viewpoint, a phenomenon known as invariance that has been described in several sensory modalities.
Here, we explore invariance in the sense of touch and show that the tactile perception of texture does not depend on scanning speed. This perceptual constancy implies neural mechanisms that extract information about texture from the response of nerve fibers such that the resulting neural representation is stable across speeds.
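The contrast between speed-dependent spike timing and speed-invariant perception can be made concrete with a toy computation. This is a sketch under two stated assumptions, neither taken from the study: spike times are idealized as feature positions divided by scanning speed, and a downstream stage has access to the speed (e.g., from proprioception). `spike_times_for_speed` is a hypothetical helper.

```python
import numpy as np

def spike_times_for_speed(spatial_features_mm, speed_mm_s):
    # Idealization: a nerve fiber fires when a spatial texture feature passes
    # under the fingertip, so spike times are feature positions / speed.
    return np.asarray(spatial_features_mm) / speed_mm_s

features = np.array([1.0, 2.5, 4.0, 7.5])       # feature positions in mm

slow = spike_times_for_speed(features, 40.0)    # 40 mm/s scan
fast = spike_times_for_speed(features, 120.0)   # 120 mm/s scan: pattern contracts

# Temporal patterns dilate or contract with speed, but mapping spike times
# back to spatial coordinates (time * speed) is speed-invariant. This is one
# candidate compensation a downstream circuit could implement.
recovered_slow = slow * 40.0
recovered_fast = fast * 120.0
print(np.allclose(recovered_slow, recovered_fast))
```

The point of the sketch is only that a representation in spatial rather than temporal coordinates is invariant by construction, which is the kind of compensation the abstract's conclusion calls for.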
Decisions vary in difficulty. Humans know this and typically report more confidence in easy than in difficult decisions. However, confidence reports do not perfectly track decision accuracy; they also reflect response biases and difficulty misjudgments. To isolate the quality of confidence reports, we developed a model of the decision-making process underlying choice-confidence data. In this model, confidence reflects a subject's estimate of the reliability of their decision. The quality of this estimate is limited by the subject's uncertainty about the uncertainty of the variable that informs their decision ("meta-uncertainty"). This model provides an accurate account of choice-confidence data across a broad range of perceptual and cognitive tasks, revealing that meta-uncertainty varies across subjects, is stable over time, generalizes across some domains, and can be manipulated experimentally. The model offers a parsimonious explanation for the computational processes that underlie and constrain the sense of confidence.

Humans are aware of the fallibility of perception and cognition. When we experience a high degree of confidence in a perceptual or cognitive decision, that decision is more likely to be correct than when we feel less confident [1]. This "metacognitive" ability helps us to learn from mistakes [2], to plan future actions [3], and to optimize group decision-making [4]. There is a long-standing interest in the mental operations underlying our sense of confidence [5-7], and the rapidly expanding field of metacognition seeks to understand how metacognitive ability varies across domains [8], individuals [9], clinical states [10], and development [11]. Quantifying a subject's ability to introspect about the correctness of a decision is a challenging problem [12-14]; there exists no generally agreed-upon method [15]. Even in the simplest decision-making tasks, several distinct factors influence a subject's confidence reports.
Consider a subject jointly reporting a binary decision about a sensory stimulus (belongs to "Category A" or "Category B") and an associated level of confidence in that decision.
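How meta-uncertainty could limit the quality of confidence reports can be sketched with a toy simulation. This is an assumed formulation, not the authors' exact model: `confidence_gap`, the lognormal jitter on the subject's noise estimate, and the median split are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def confidence_gap(meta_uncertainty, n_trials=20000, sigma=1.0):
    # The subject decides on the sign of a noisy decision variable, then
    # rates confidence by comparing its magnitude to an estimate of the
    # noise level sigma. Meta-uncertainty makes that estimate jitter.
    stim = rng.choice([-0.5, 0.5], size=n_trials)
    dv = stim + rng.normal(0.0, sigma, n_trials)       # decision variable
    correct = np.sign(dv) == np.sign(stim)
    sigma_hat = sigma * rng.lognormal(0.0, meta_uncertainty, n_trials)
    confidence = np.abs(dv) / sigma_hat                # estimated reliability
    high = confidence > np.median(confidence)
    # Metacognitive sensitivity: accuracy gap between high- and
    # low-confidence trials; it shrinks as meta-uncertainty grows.
    return correct[high].mean() - correct[~high].mean()

gap_low_mu = confidence_gap(0.1)    # sharp knowledge of one's own noise
gap_high_mu = confidence_gap(1.5)   # poor knowledge of one's own noise
print(gap_low_mu, gap_high_mu)
```

The sketch shows the key qualitative behavior: with identical choices and accuracy, larger meta-uncertainty alone degrades how well confidence reports track correctness.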