2023
DOI: 10.1038/s41467-023-37777-2

Calibration of cognitive tests to address the reliability paradox for decision-conflict tasks

Abstract: Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This reliability paradox has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. We aim to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting info…

Cited by 10 publications (6 citation statements)
References 59 publications (71 reference statements)
“…The first conclusion was at the methodological level. It was proposed that the typical tasks or measures should be abandoned, and new ways of measuring attentional control should be developed (e.g., Burgoyne et al., 2023; Draheim et al., 2021, 2024; Kucina et al., 2023; Martin et al., 2020; Rey-Mermet & Rothen, 2023a, 2023b). In contrast, the second conclusion was at the conceptual level.…”
Section: Beyond the Methodological Challenges
confidence: 99%
“…We anticipated that the Pavlovian bias in choice and reaction times would be modulated along with the change in uncontrollability, according to the model simulation experiments presented earlier. The virtual reality environment improves ecological validity [Parsons, 2015] and introduces gamification, which is known to improve the reliability of studies [Sailer et al., 2017; Kucina et al., 2023; Zorowitz et al., 2023], which is important in attempts to uncover potentially subtle biases. We used hierarchical Bayesian estimation of model parameters to increase reliability across tasks (including the Go/No-Go tasks [Zorowitz et al., 2023]).…”
Section: Experiment 3: Human Approach-withdrawal Conditioning Is Mod...
confidence: 99%
“…To derive a more accurate estimate of the average split-half reliability for each SPE measure, we synthesized these reliability coefficients using a meta-analytic approach. We weighted each reliability coefficient by the study's trial count, since the number of trials typically has a substantial influence on the reliability of cognitive experiments (Kucina et al., 2023) (see also Supplementary Fig. S7 for our exploratory analysis).…”
Section: Estimating the Reliability of SPE
confidence: 99%
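The split-half reliability being estimated in the statement above can be illustrated with a minimal sketch: split each subject's trials into odd and even halves, correlate the half-scores across subjects, and apply the Spearman-Brown correction. The function names and toy data here are illustrative assumptions, not the cited studies' actual code or data.

```python
# Sketch of a split-half reliability estimate with Spearman-Brown
# correction. Toy per-subject trial data, not the cited studies' data.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(trials):
    """Mean of odd- vs. even-numbered trials per subject, correlated
    across subjects, then Spearman-Brown corrected to full length."""
    odd = [sum(t[0::2]) / len(t[0::2]) for t in trials]
    even = [sum(t[1::2]) / len(t[1::2]) for t in trials]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

# Toy data: three subjects whose odd/even halves agree perfectly,
# so the corrected reliability is 1.0.
print(split_half_reliability([[1, 2, 1, 2], [3, 4, 3, 4], [5, 6, 5, 6]]))
```

With noisier real data the half-scores correlate imperfectly and the estimate drops below 1, which is why studies with few trials tend to show low reliability.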
“…Third, we did not explicitly state in the preregistration report that we would compute a weighted average of the split-half reliabilities across all datasets. However, given the significant impact of the number of trials on reliability (Kucina et al., 2023), during the formal analysis we assigned each study a weight based on its number of trials and then calculated a weighted average of the split-half reliabilities.…”
Section: Deviation From Preregistration
confidence: 99%
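The trial-count weighting described in the statement above amounts to a simple weighted mean of the per-study reliability coefficients. A minimal sketch, using hypothetical coefficients and trial counts (the real values are reported in the cited paper):

```python
# Sketch: trial-count-weighted average of split-half reliability
# coefficients across studies. The coefficients and trial counts
# below are hypothetical placeholders.

def weighted_mean_reliability(coefficients, trial_counts):
    """Weight each study's reliability coefficient by its trial count."""
    total = sum(trial_counts)
    return sum(r * n for r, n in zip(coefficients, trial_counts)) / total

reliabilities = [0.55, 0.70, 0.80]   # hypothetical split-half coefficients
trials = [100, 200, 400]             # hypothetical trial counts per study

print(round(weighted_mean_reliability(reliabilities, trials), 3))  # → 0.736
```

Because the higher-trial studies carry more weight, the pooled estimate (0.736) sits above the unweighted mean (0.683), reflecting the assumption that larger trial counts yield more trustworthy reliability estimates.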