What gives rise to the human sense of confidence? Here, we tested the Bayesian hypothesis that confidence is based on a probability distribution represented in neural population activity. We implemented several computational models of confidence, and tested their predictions using psychophysics and fMRI. Using a generative model-based fMRI decoding approach, we extracted probability distributions from neural population activity in human visual cortex. We found that subjective confidence tracks the shape of the decoded distribution. That is, when sensory evidence was more precise, as indicated by the decoded distribution, observers reported higher levels of confidence. We furthermore found that neural activity in the insula, anterior cingulate, and prefrontal cortex was linked to both the shape of the decoded distribution and reported confidence, in ways consistent with the Bayesian model. Altogether, our findings support recent statistical theories of confidence and suggest that probabilistic information guides the computation of one’s sense of confidence.
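To make the proposed readout concrete, here is a minimal sketch (our illustration under stated assumptions, not the authors' analysis code) of how confidence could be computed from a probability distribution decoded from population activity. The idea is that Bayesian confidence is the posterior mass supporting the chosen response, so a more precise (narrower) decoded distribution yields higher confidence. The function name, grid, and parameter values are all hypothetical.

```python
import numpy as np

def bayesian_confidence(decoded_dist, stimulus_grid, boundary=0.0):
    """Posterior probability that a binary choice (stimulus > boundary?)
    is correct, given a decoded distribution p(stimulus | activity)."""
    mass_right = decoded_dist[stimulus_grid > boundary].sum()
    return max(mass_right, 1.0 - mass_right)

# Narrow vs. broad decoded distributions centered on the same value (+2 deg):
grid = np.linspace(-20.0, 20.0, 401)
for sigma in (2.0, 8.0):                      # decoded precision varies
    dist = np.exp(-0.5 * ((grid - 2.0) / sigma) ** 2)
    dist /= dist.sum()                        # normalize on the grid
    print(sigma, round(bayesian_confidence(dist, grid), 3))
```

The narrower distribution places more posterior mass on the chosen side of the boundary (roughly 0.84 vs. 0.60 here), which is the sense in which confidence can "track the shape" of the decoded distribution.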
Many daily situations require us to track multiple objects and people. This ability has traditionally been investigated in observers tracking objects in a plane. This simplification of reality does not address how observers track targets that move in three dimensions. Here, we study how observers track multiple objects in 2D and 3D while manipulating the average speed of the objects and the average distance between them. We show that performance declines as speed increases and distance decreases, and that overall tracking accuracy is always higher in 3D than in 2D. The effects of distance and dimensionality interact to produce a more-than-additive improvement in performance during tracking in 3D compared to 2D. We propose an ideal observer model that uses the object dynamics and noisy observations to track the objects. This model provides a good fit to the data and explains the key findings of our experiment as originating from improved inference of object identity when the depth dimension is added.
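As a concrete illustration of this class of model (a sketch under our own assumptions, not the paper's implementation), the code below casts tracking as per-object prediction from known dynamics followed by maximum-likelihood assignment of noisy observations to objects. For brevity, only the identity-assignment step is implemented, taking the dynamics-based position predictions as given; the function name and all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_observations(predicted_pos, observations, obs_noise_sd):
    """Maximum-likelihood assignment of noisy observations to objects.
    predicted_pos, observations: (n_objects, n_dims) arrays, n_dims = 2 or 3."""
    # Squared distances give the negative log-likelihood of each observation
    # under each predicted position (isotropic Gaussian observation noise).
    d2 = ((predicted_pos[:, None, :] - observations[None, :, :]) ** 2).sum(-1)
    _, cols = linear_sum_assignment(d2 / (2.0 * obs_noise_sd ** 2))
    return cols  # cols[i] = observation index assigned to object i

# Toy demo: identical x-y layout in both conditions; the 3D condition adds
# well-separated depths, which makes identity inference less ambiguous.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 6.0, size=(4, 2))          # crowded 2D positions
depth = np.arange(4)[:, None] * 4.0              # separated depth coordinates
noise = rng.normal(0.0, 1.5, size=(4, 3))        # shared noise for comparability
for pos in (xy, np.hstack([xy, depth])):          # 2D vs. 3D
    obs = pos + noise[:, :pos.shape[1]]
    print(pos.shape[1], "D:", assign_observations(pos, obs, obs_noise_sd=1.5))
```

With crowded 2D positions, noisy observations can be misassigned to the wrong object; separating the same objects in depth increases their mutual distances, so the correct assignment becomes far more likely, consistent with the identity-inference account above.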
The brain uses self-motion information to internally update egocentric representations of the locations of remembered world-fixed visual objects. If a discrepancy is observed between this internal update and reafferent visual feedback, it could be due either to an inaccurate update or to the object having moved during the motion. To optimally infer the object’s location, it is therefore critical for the brain to estimate the probabilities of these two causal structures and accordingly integrate and/or segregate the internal and sensory estimates. To test this hypothesis, we designed a spatial updating task involving passive whole-body translation. Participants, seated on a vestibular sled, had to remember the world-fixed position of a visual target. Immediately after the translation, reafferent visual feedback was provided by flashing a second target around the estimated “updated” target location, and participants had to report the initial target location. We found that participants’ responses were systematically biased toward the position of the second target for relatively small, but not for large, differences between the “updated” and second target locations. This pattern was better captured by a Bayesian causal inference model than by alternative models that would always either integrate or segregate the internally updated target location and the visual feedback. Our results suggest that the brain implicitly represents the posterior probability that the internally updated estimate and the visual feedback come from a common cause and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion. NEW & NOTEWORTHY When we move, egocentric representations of object locations require internal updating to keep them in register with their true world-fixed locations. How does this mechanism interact with reafferent visual input, given that objects typically do not disappear from view? Here we show that the brain implicitly represents the probability that both types of information derive from the same object and uses this probability to weigh their contributions in achieving spatial constancy across whole-body motion.
Author Summary
A change of an object's position on our retina can be caused by a change of the object's location in the world or by a movement of the eye and body. Here, we examine how the brain solves this problem for spatial updating by assessing the probability that the internally updated location during body motion and the retinal feedback observed after the motion stem from the same object location in the world. Guided by a Bayesian causal inference model, we demonstrate that participants' errors in spatial updating depend nonlinearly on the spatial discrepancy between the internally updated estimate and the reafferent visual feedback about the object's location in the world. We propose that the brain implicitly represents the probability that the internally updated estimate and the sensory feedback come from a common cause, and uses this probability to weigh the two sources of information in mediating spatial constancy across whole-body motion.
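The integrate-versus-segregate computation described in this abstract has a standard closed form for Gaussian cues (following Kording et al., 2007). The sketch below is our illustration of that form, not the authors' fitted model; the zero-mean prior, the parameter values, and the function name are assumptions. The reported location is a model average: the fused and independent estimates weighted by the posterior probability of a common cause.

```python
import numpy as np

def causal_inference_report(x_upd, x_vis, sd_upd, sd_vis, sd_prior, p_common):
    """Model-averaged estimate of the remembered target location.
    x_upd: internally updated location estimate; x_vis: visual feedback.
    Gaussian prior over locations with mean 0 and sd sd_prior (assumed)."""
    v1, v2, vp = sd_upd**2, sd_vis**2, sd_prior**2
    # Likelihood of the measurement pair under each causal structure
    denom = v1*v2 + v1*vp + v2*vp
    like_c1 = np.exp(-0.5 * ((x_upd - x_vis)**2*vp + x_upd**2*v2 + x_vis**2*v1)
                     / denom) / (2*np.pi*np.sqrt(denom))
    like_c2 = (np.exp(-0.5*x_upd**2/(v1+vp)) / np.sqrt(2*np.pi*(v1+vp))
               * np.exp(-0.5*x_vis**2/(v2+vp)) / np.sqrt(2*np.pi*(v2+vp)))
    # Posterior probability that both measurements share a common cause
    p_c1 = p_common*like_c1 / (p_common*like_c1 + (1-p_common)*like_c2)
    # Optimal estimates under each structure (precision-weighted means)
    s_integrate = (x_upd/v1 + x_vis/v2) / (1/v1 + 1/v2 + 1/vp)
    s_segregate = (x_upd/v1) / (1/v1 + 1/vp)
    # Model averaging: weight the two estimates by p(common cause)
    return p_c1*s_integrate + (1 - p_c1)*s_segregate, p_c1

# Small vs. large discrepancy between the update (at 0) and feedback (at dx):
for dx in (2.0, 20.0):
    est, p_c1 = causal_inference_report(0.0, dx, 2.0, 1.0, 10.0, 0.5)
    print(f"dx={dx}: p(common)={p_c1:.2f}, reported location={est:.2f}")
```

A small discrepancy yields a high common-cause posterior and a response pulled toward the feedback; a large discrepancy collapses that posterior and the bias vanishes, reproducing the nonlinear pattern described above.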