Scientific reasoning ability, the ability to reason critically about the quality of scientific evidence, can help laypeople use scientific evidence when making judgments and decisions. We ask whether individuals with greater scientific reasoning ability are also better calibrated with respect to their ability, comparing calibration for skill with the more widely studied calibration for knowledge. In three studies, participants (Study 1: N = 1022; Study 2: N = 101; and Study 3: N = 332) took the Scientific Reasoning Scale (SRS; Drummond & Fischhoff, 2017), composed of 11 true–false problems, and provided confidence ratings for each problem. Overall, participants were overconfident, reporting mean confidence levels that were 22.4–25 percentage points higher than their percentages of correct answers; calibration improved with score. Study 2 found similar calibration patterns for the SRS and another skill measure, the Cognitive Reflection Test (CRT), which assesses the ability to avoid intuitive but incorrect answers. SRS and CRT scores were both associated with success at avoiding negative decision outcomes, as measured by the Decision Outcomes Inventory; confidence on the SRS, above and beyond scores, predicted worse outcomes. Study 3 added an alternative measure of calibration, asking participants to estimate the number of items they had answered correctly. Participants were less overconfident by this measure. SRS scores predicted correct use of scientific information in a drug facts box task and holding beliefs consistent with the scientific consensus on controversial issues; confidence, above and beyond SRS scores, predicted worse drug facts box performance but stronger science-consistent beliefs. We discuss the implications of our findings for improving science-relevant decision-making.
Extensive research has been conducted to understand how accurately students monitor their studying and performance via metacognitive judgments. The bases of students' metacognitive judgments are also of interest. While previous results are quite consistent regarding the importance of performance for the accuracy of metacognitive judgments, results regarding motivational and personality variables are rather heterogeneous. This paper reports on two studies that simultaneously examined the predictive power of several performance, motivational, and personality variables on metacognitive judgments. The studies investigated a set of judgments (local and global postdictions in Study 1; global pre- and postdictions in Study 2) and accuracy scores (bias, sensitivity, and specificity) in two different settings. Individual differences in judgments and judgment accuracy were studied via hierarchical regression analyses. Study 1, with N = 245 undergraduate students, identified performance and domain-specific self-concept as relevant predictors of judgments after test taking. This was consistently found for local and global judgments. Study 2, with N = 138 undergraduate students, therefore focused on domain-specific self-concept and extended the results to predictions. Study 2 replicated the results for global postdictions but not for predictions. Specifically, before task processing, students' judgments relied mostly on domain-specific self-concept rather than on test performance itself. The studies indicate that different judgments and measures of judgment accuracy are needed to obtain comprehensive insights into individual differences in metacognitive monitoring.