2021
DOI: 10.1093/nc/niab040
Measuring metacognitive performance: type 1 performance dependence and test-retest reliability

Abstract: Research on metacognition—thinking about thinking—has grown rapidly and fostered our understanding of human cognition in healthy individuals and clinical populations. Of central importance is the concept of metacognitive performance, which characterizes the capacity of an individual to estimate and report the accuracy of primary (type 1) cognitive processes or actions ensuing from these processes. Arguably one of the biggest challenges for measures of metacognitive performance is their dependency on objective …

Cited by 41 publications (60 citation statements) | References 53 publications
“…In this light, state-oriented individuals might only express lower confidence while not truly experiencing it. Such report bias [121] could be driven by factors including modesty or excessive self-monitoring [122]. Finally, a more mechanistic account of metacognitive bias at the cognitive and neural levels is needed to understand how the confidence gap arises.…”
Section: Discussion
confidence: 99%
“…We found that this model provides an excellent account of choice-confidence data reported in a large set of previously published studies 23–28. Our analysis suggests that meta-uncertainty provides a better metric for metacognitive ability than the non-process-model-based alternatives that currently prevail in the literature 13,15. Specifically, meta-uncertainty has higher test-retest reliability, is less affected by discrimination ability and response bias, and has comparable cross-domain generalizability.…”
confidence: 79%
“…It has long been known that humans and other animals can meaningfully introspect about the quality of their decisions and actions 5–7,31,48. Quantifying this ability has remained a significant challenge, even for simple binary decision-making tasks 12,13,15,28,40,41. The core problem is that observable choice-confidence data reflect metacognitive ability as well as task difficulty and response bias.…”
Section: Discussion
confidence: 99%
“…In the literature on confidence judgments, two main methods are used to achieve such control. The first is to use a metacognitive sensitivity measure that takes first-order performance into account, such as the meta-d'/d' ratio (Fleming, 2017; Maniscalco & Lau, 2012), also known as metacognitive efficiency, even though small dependencies between d' and the meta-d'/d' ratio also exist, particularly for low first-order performance (Guggenmos, 2021), which can be problematic with respect to the memory impairment in aging. However, as gamma is mainly influenced by bias and guessing, and as guessing plays a much reduced role in recall, we suggest that gamma remains a reliable measure in this context, such as for JOLs and Ease-Of-Learning judgments.…”
Section: Discussion
confidence: 99%
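The citation statements above contrast several measures of metacognitive sensitivity: the meta-d'/d' ratio, meta-uncertainty, and the Goodman–Kruskal gamma correlation. A minimal sketch of the two simplest ingredients, type 1 sensitivity d' and gamma between confidence and accuracy, is given below; it uses only the Python standard library, and the example data are invented for illustration (a full meta-d' fit, as in Maniscalco & Lau, requires model fitting beyond this sketch).

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Type 1 sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def gamma(confidence: list, accuracy: list) -> float:
    """Goodman-Kruskal gamma between confidence ratings and accuracy (0/1).

    G = (C - D) / (C + D), counting concordant (C) and discordant (D)
    pairs; tied pairs are ignored, as in the standard definition.
    """
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            sign = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else float("nan")

# Illustrative (invented) data: higher confidence tracks correct trials.
print(round(d_prime(0.8, 0.2), 3))        # type 1 sensitivity
print(gamma([3, 1, 2, 3], [1, 0, 1, 1]))  # perfect confidence-accuracy ordering
```

As the quoted passage notes, gamma is sensitive to bias and guessing, and d'-based ratios show residual dependence on first-order performance; this sketch only illustrates what the raw quantities are, not which measure is preferable.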