Acute exercise generally benefits memory, but little research has examined how exercise affects metacognition (knowledge of memory performance). We show that a single bout of exercise can influence metacognition in paired-associate learning. Participants completed 30 min of moderate-intensity exercise before or after studying a series of word pairs (cloud-ivory), and completed cued-recall tests (cloud-?; Experiments 1 & 2) and recognition memory tests (cloud-?: spoon, ivory, drill, choir; Experiment 2). Participants made judgments of learning prior to cued-recall tests (JOLs; predicted likelihood of recalling the second word of each pair when shown the first) and feeling-of-knowing judgments prior to recognition tests (FOK; predicted likelihood of recognizing the second word from four alternatives). Compared to no-exercise control conditions, exercise before encoding enhanced cued recall in Experiment 1 but not Experiment 2, and did not affect recognition. Exercise after encoding did not influence memory. In conditions where exercise did not benefit memory, it increased JOLs and FOK judgments relative to accuracy (Experiments 1 & 2) and impaired the relative accuracy of JOLs (the ability to distinguish remembered from non-remembered items; Experiment 2). Acute exercise seems to signal likely remembering; this has implications for understanding the effects of exercise on metacognition, and for incorporating exercise into study routines.
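The relative accuracy of JOLs mentioned above is conventionally quantified with a Goodman-Kruskal gamma correlation between each participant's item-level judgments and their recall outcomes. The Python sketch below illustrates that standard computation on hypothetical values; it is not the authors' analysis code, and the function and variable names are purely illustrative.

from itertools import combinations

def goodman_kruskal_gamma(judgments, outcomes):
    # Gamma correlation between item-level judgments (e.g., JOLs on a
    # 0-100 scale) and binary memory outcomes (1 = recalled, 0 = not).
    concordant = discordant = 0
    for (j1, o1), (j2, o2) in combinations(zip(judgments, outcomes), 2):
        if j1 == j2 or o1 == o2:
            continue  # tied pairs are ignored
        if (j1 - j2) * (o1 - o2) > 0:
            concordant += 1
        else:
            discordant += 1
    if concordant + discordant == 0:
        return float("nan")  # undefined when every pair is tied
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical participant: higher JOLs for later-recalled pairs -> gamma near 1
jols = [80, 60, 30, 90, 20, 50]
recalled = [1, 1, 0, 1, 0, 0]
print(goodman_kruskal_gamma(jols, recalled))  # 1.0; chance-level discrimination is 0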
Presenting a blank lineup (containing only fillers) to witnesses prior to showing a real lineup might be useful for screening out those who pick from the blank lineup as unreliable witnesses. We show that the effectiveness of this procedure varies depending on instructions given to witnesses. Participants (N = 462) viewed a simulated crime and attempted to identify the perpetrator from a lineup approximately one week later. Rejecting a blank lineup was associated with greater identification accuracy and greater diagnosticity of suspect identifications, but only when witnesses were instructed prior to the blank lineup that they would view a series of lineups; the procedure was ineffective for screening when witnesses were advised they would view two lineups or received no instruction. These results highlight the importance of instructions used in the blank lineup procedure, and the need for better understanding of how to interpret choosing patterns in this paradigm.
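Diagnosticity of suspect identifications is standardly defined as the ratio of the suspect-identification rate when the culprit is present in the lineup to the suspect-identification rate when the culprit is absent. The minimal Python illustration below shows that ratio; the counts are hypothetical and do not come from this study.

def diagnosticity_ratio(present_suspect_ids, present_n, absent_suspect_ids, absent_n):
    # Suspect-ID rate in culprit-present lineups divided by the rate in
    # culprit-absent lineups; larger values mean a more probative identification.
    hit_rate = present_suspect_ids / present_n
    false_id_rate = absent_suspect_ids / absent_n
    return hit_rate / false_id_rate

# Hypothetical counts for witnesses who rejected vs. picked from the blank lineup
print(diagnosticity_ratio(30, 60, 5, 60))   # 6.0
print(diagnosticity_ratio(20, 60, 10, 60))  # 2.0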
Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. Over five experiments, we show that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make these tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out.
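Reliability of individual differences in conflict tasks is commonly evaluated with a split-half approach: each participant's conflict effect (mean incongruent minus mean congruent response time) is computed on two halves of the trials, the half-scores are correlated across participants, and the correlation is adjusted with the Spearman-Brown formula. The Python sketch below illustrates that generic logic on simulated data; it is not the authors' released task or analysis code, and all names and numbers are assumptions made for the example.

import numpy as np

def conflict_effect(rts, congruency):
    # Conflict effect = mean incongruent RT minus mean congruent RT (ms).
    rts, congruency = np.asarray(rts), np.asarray(congruency)
    return rts[congruency == "incongruent"].mean() - rts[congruency == "congruent"].mean()

def split_half_reliability(per_subject_trials):
    # Odd/even split-half correlation of conflict effects across participants,
    # stepped up to full test length with the Spearman-Brown formula.
    odd, even = [], []
    for rts, congruency in per_subject_trials:
        rts, congruency = np.asarray(rts), np.asarray(congruency)
        idx = np.arange(len(rts))
        odd.append(conflict_effect(rts[idx % 2 == 1], congruency[idx % 2 == 1]))
        even.append(conflict_effect(rts[idx % 2 == 0], congruency[idx % 2 == 0]))
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulated participants whose true conflict effects range from 10 to 80 ms
rng = np.random.default_rng(0)
subjects = []
for true_effect in range(10, 90, 10):
    congruency = rng.permutation(np.tile(["congruent", "incongruent"], 50))
    rts = rng.normal(500, 50, 100) + np.where(congruency == "incongruent", true_effect, 0)
    subjects.append((rts, congruency))
print(split_half_reliability(subjects))

Difference scores like these tend to be unreliable because trial-level noise from both conditions accumulates while true between-person variation in the effect is often small, which is the core of the reliability paradox the abstract describes.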