Human metacognition, the capacity to introspect on one's own mental states, has mostly been characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question would determine whether metacognition operates through shared, supramodal mechanisms or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence, involving decisional and postdecisional mechanisms, that argue for the supramodality of metacognition. First, metacognitive efficiency correlated among auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses on visual and audiovisual tasks involved similar electrophysiological markers, associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and on decisional signals that are shared across sensory modalities.

Metacognitive monitoring is the capacity to access, report, and regulate one's own mental states. In perception, it allows us to rate our confidence in what we have seen, heard, or touched. Although metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains or modality-specific mechanisms idiosyncratic to each domain. Here, we provide evidence for the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates and by the influence of decisional cues on confidence.
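The logic of the first result, cross-modal correlation of metacognitive ability, can be sketched in a few lines of Python. The paper's measure is metacognitive efficiency (meta-d'/d'), which requires maximum-likelihood fitting; the sketch below instead uses the simpler, model-free type-2 AUROC as a stand-in for metacognitive sensitivity. All names, parameter values, and the synthetic data are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def type2_auroc(correct, confidence):
    """Model-free index of metacognitive sensitivity: the probability that a
    randomly drawn correct trial carries higher confidence than a randomly
    drawn error trial (area under the type-2 ROC)."""
    conf_correct = confidence[correct]
    conf_error = confidence[~correct]
    diffs = conf_correct[:, None] - conf_error[None, :]  # all pairwise comparisons
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

rng = np.random.default_rng(0)
# Synthetic example: 20 participants, two tasks; confidence tracks accuracy
# to a participant-specific degree that is shared across tasks.
skill = rng.uniform(0.2, 1.0, 20)
scores = {}
for task in ("visual", "auditory"):
    per_participant = []
    for s in skill:
        correct = rng.random(400) < 0.75                    # ~75% task accuracy
        confidence = correct * s + rng.normal(0, 0.5, 400)  # noisy confidence readout
        per_participant.append(type2_auroc(correct, confidence))
    scores[task] = per_participant

rho, p = spearmanr(scores["visual"], scores["auditory"])
print(f"cross-modal correlation of metacognitive sensitivity: rho={rho:.2f}")
```

A positive across-participant correlation of this kind is what the supramodality hypothesis predicts; modality-specific mechanisms would predict independence.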
What aspects of neuronal activity distinguish the conscious from the unconscious brain? This has been a subject of intense interest and debate since the early days of neurophysiology. However, as any practicing anesthesiologist can attest, it is currently not possible to reliably distinguish a conscious state from an unconscious one on the basis of brain activity. Here we approach this problem from the perspective of dynamical systems theory. We argue that the brain, as a dynamical system, is self-regulated at the boundary between stable and unstable regimes, which in particular allows it to maintain high susceptibility to stimuli. To test this hypothesis, we performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. We show that, during loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, independently of the anesthetic used and of specific features of brain activity. The eigenmodes drift back toward the unstable line during recovery of consciousness. Furthermore, we show that stability is an emergent phenomenon that depends on the correlations among activity in different cortical regions rather than on signals taken in isolation. These findings support the conclusion that dynamics at the edge of instability are essential for maintaining consciousness and provide a novel, principled measure that distinguishes the conscious from the unconscious brain.
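A minimal sketch of one standard way to perform such a stability analysis, assuming a first-order linear (autoregressive) model of the multichannel recordings: fit a propagator matrix, take its eigenvalues, and count the eigenmodes near the unit circle. The abstract does not specify the paper's exact pipeline, so the model order, the `edge_band` threshold, and the synthetic data here are illustrative assumptions.

```python
import numpy as np

def eigenmodes_near_instability(x, edge_band=0.05):
    """Fit a first-order linear model x[t+1] ~ A @ x[t] to multichannel data
    (channels x time) and count eigenmodes whose eigenvalue magnitude lies
    within `edge_band` of the unit circle, i.e. at the edge of instability."""
    past, future = x[:, :-1], x[:, 1:]
    A = future @ np.linalg.pinv(past)  # least-squares estimate of the propagator
    eigvals = np.linalg.eigvals(A)
    n_edge = int(np.sum(np.abs(eigvals) > 1.0 - edge_band))
    return eigvals, n_edge

# Synthetic check: a weakly damped oscillator sits near the unit circle.
rng = np.random.default_rng(1)
theta, r = 0.1, 0.99  # rotation and damping per time step
A_true = r * np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
x = np.zeros((2, 5000))
for t in range(4999):
    x[:, t + 1] = A_true @ x[:, t] + rng.normal(0, 0.1, 2)

_, n_edge = eigenmodes_near_instability(x)
print(f"eigenmodes within 0.05 of the unit circle: {n_edge}")  # expect 2
```

In this framing, the paper's finding corresponds to `n_edge` shrinking under anesthesia and recovering with consciousness; crucially, the eigenvalues are a property of the joint dynamics across channels, not of any single channel, which is why stability is described as emergent from inter-regional correlations.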
Human peripheral vision appears vivid compared to foveal vision; the subjectively perceived level of detail does not seem to drop abruptly with eccentricity. This compelling impression contrasts with the fact that spatial resolution is substantially lower at the periphery. A similar phenomenon occurs in visual attention, where subjects typically overestimate their perceptual capacity in the unattended periphery. We have previously shown that, at identical eccentricity, low spatial attention is associated with liberal detection biases, which we argue may reflect inflated subjective perceptual quality. Our computational model suggests that this subjective inflation occurs because, in the absence of attention, the trial-by-trial variability of the internal neural response is increased, resulting in more frequent crossings of the detection criterion. In the current work, we hypothesized that the same mechanism may be at work in peripheral vision. We investigated this possibility in psychophysical experiments in which participants performed a simultaneous detection task at the center and at the periphery. Confirming our hypothesis, we found that participants adopted a conservative criterion at the center and a liberal criterion at the periphery. Furthermore, an extension of our model predicts that detection bias will be similar at the center and at the periphery if the peripheral stimuli are magnified. A second experiment confirmed this prediction. These results suggest that, although other factors, such as top-down filling-in of information, contribute to the subjective inflation of peripheral vision, the decision mechanism may play a role as well.
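The criterion-crossing mechanism can be made concrete with a short simulation in the signal detection framework the abstract describes. In this sketch, the only difference between "fovea" and "periphery" is the trial-by-trial variability of the internal response; with the decision criterion held fixed, the noisier periphery produces more false alarms, i.e. apparently more liberal detection. The d', sigma, and criterion values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rates(d_prime, sigma, criterion, n=100_000):
    """Simulate a yes/no detection task: the observer reports 'present' when
    the internal response exceeds `criterion`. Larger trial-by-trial
    variability (`sigma`) pushes more noise trials over the criterion."""
    noise = rng.normal(0.0, sigma, n)       # stimulus-absent trials
    signal = rng.normal(d_prime, sigma, n)  # stimulus-present trials
    return np.mean(signal > criterion), np.mean(noise > criterion)

# Same criterion everywhere; higher internal variability at the periphery.
hit_fov, fa_fov = detection_rates(d_prime=2.0, sigma=1.0, criterion=1.0)
hit_per, fa_per = detection_rates(d_prime=2.0, sigma=1.5, criterion=1.0)
print(f"fovea:     hits={hit_fov:.2f}, false alarms={fa_fov:.2f}")
print(f"periphery: hits={hit_per:.2f}, false alarms={fa_per:.2f}")
```

The periphery's false-alarm rate rises (here roughly 0.16 to 0.25), so the measured bias shifts liberal even though the observer's criterion never moved, which is the model's account of subjective inflation without attention or magnification.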