We recently found a positive relationship between estimates of metacognitive efficiency and metacognitive bias. However, this relationship was only examined at the within-subject level and required binarizing the confidence scale, a technique that introduces methodological difficulties. Here we examined the robustness of the positive relationship between estimates of metacognitive efficiency and metacognitive bias by conducting two different types of analyses. First, we developed a new within-subject analysis technique where the original n-point confidence scale is transformed into two different (n-1)-point scales in a way that mimics a naturalistic change in confidence. Second, we examined the across-subject correlation between metacognitive efficiency and metacognitive bias. Importantly, for both types of analyses, we not only established the direction of the effect but also computed effect sizes. We applied both techniques to the data from three tasks from the Confidence Database (N > 400 in each). We found that both approaches revealed a small to medium positive relationship between metacognitive efficiency and metacognitive bias. These results demonstrate that the positive relationship between metacognitive efficiency and metacognitive bias is robust across several analysis techniques and datasets, and have important implications for future research.
It is now widely appreciated that confidence ratings are corrupted by metacognitive noise. We recently demonstrated that the level of metacognitive noise increases as the evidence for a decision becomes stronger. This effect leads to the prediction that metacognitive sensitivity and metacognitive bias are confounded such that increasing one’s confidence should result in higher estimated metacognitive sensitivity. In order to test this predicted relationship, we developed a new method to simulate a change in confidence by removing the confidence criteria separating either the highest or the lowest confidence ratings. The method enables us to manipulate a subject’s metacognitive bias after data have already been collected. We applied this manipulation to the data from three tasks from the Confidence Database (N > 400 in each) and found that simulating a bias towards higher confidence indeed led to higher estimates of metacognitive sensitivity. These results provide support for the notion that metacognitive noise increases with decision evidence, and point to an important confound between metacognitive sensitivity and metacognitive bias.
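The criterion-removal manipulation described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' code: the function name `collapse_scale`, its arguments, and the convention of integer ratings 1..n are assumptions made for the example.

```python
def collapse_scale(ratings, n, side="high"):
    """Map ratings on an n-point confidence scale (integers 1..n) onto
    an (n-1)-point scale by removing one confidence criterion.

    side="high": remove the criterion separating ratings n-1 and n, so
                 the two highest ratings merge into the new top category.
    side="low":  remove the criterion separating ratings 1 and 2, so
                 the two lowest ratings merge into the new bottom
                 category, and the remaining ratings shift down by one.
    """
    if side == "high":
        return [min(r, n - 1) for r in ratings]
    elif side == "low":
        return [max(r - 1, 1) for r in ratings]
    raise ValueError("side must be 'high' or 'low'")

# Example: derive both (n-1)-point scales from the same 4-point data.
original = [4, 2, 3, 1, 4, 3]
print(collapse_scale(original, 4, side="high"))  # [3, 2, 3, 1, 3, 3]
print(collapse_scale(original, 4, side="low"))   # [3, 1, 2, 1, 3, 2]
```

Note that with `side="high"` the former ratings 3 and 4 share the top category of the resulting 3-point scale, which increases the proportion of maximum-confidence responses relative to the scale range; the manipulation is applied after data collection, so no new responses are needed.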
Appropriate perceptual decision making necessitates the accurate estimation and use of sensory uncertainty. Such estimation has been studied in the context of both low-level multisensory cue combination and metacognitive estimation of confidence, but it remains unclear whether the same computations underlie both sets of uncertainty estimation. We created visual stimuli with low vs. high overall motion energy, such that the high-energy stimuli led to higher confidence but lower accuracy in a visual-only task. Importantly, we tested the impact of the low- and high-energy visual stimuli on auditory motion perception in a separate task. Despite being irrelevant to the auditory task, both visual stimuli impacted auditory judgments, presumably via automatic low-level mechanisms. Critically, we found that the high-energy visual stimuli influenced the auditory judgments more strongly than the low-energy visual stimuli. This effect was in line with the confidence but contrary to the accuracy differences between the high- and low-energy stimuli in the visual-only task. These effects were captured by a simple computational model that assumes common computational principles underlying both confidence reports and multisensory cue combination. Our results reveal a deep link between automatic sensory processing and metacognitive confidence reports, and suggest that vastly different stages of perceptual decision making rely on common computational principles.
Human behavior is known to be idiosyncratic, yet research in neuroscience typically assumes a universal brain-behavior relationship. Here we test this assumption by estimating the level of idiosyncrasy in individual brain-behavior maps obtained using human neuroimaging. We first show that task-based activation maps are both stable within an individual and similar across people. Critically, although behavior-based activation maps are also stable within an individual, they strongly diverge across people. A computational model that jointly generates brain activity and behavior explains these results and reveals that within-person factors have a much larger effect than group factors in determining behavior-based activations. These findings demonstrate that unlike task-based activity that is mostly similar among people, the relation between brain activity and behavioral outcomes is largely idiosyncratic. Thus, contrary to popular assumptions, group-level behavior-based maps reveal relatively little about each individual.
Knowing when confidence computations take place is critical for building a mechanistic understanding of the neural and computational bases of metacognition. Yet, even though a substantial amount of research has focused on revealing the neural correlates and computations underlying human confidence judgments, very little is known about the timing of confidence computations. Here, subjects judged the orientation of a briefly presented visual stimulus and provided a confidence rating regarding the accuracy of their decision. We delivered single pulses of transcranial magnetic stimulation (TMS) at different times after stimulus presentation. TMS was delivered to either dorsolateral prefrontal cortex (DLPFC) in the experimental group or to vertex in the control group. We found that TMS to DLPFC, but not to vertex, led to increased confidence in the absence of changes to accuracy or metacognitive ability. Critically, equivalent levels of confidence increase occurred for TMS delivered between 200 and 500 ms after stimulus presentation. These results suggest that confidence computations occur during a broad window that begins before the perceptual decision has been fully made and thus provide important constraints for theories of confidence generation.