2020
DOI: 10.1038/s41562-020-00976-8

Calibrating the experimental measurement of psychological attributes

Abstract: Experimental psychologists often seek to measure latent psychological attributes. A plethora of observation and data transformation methods are used simultaneously in any given field, with few criteria to arbitrate between them. In this theoretical note, we extend classical validity theory by suggesting the use of intended values in a benchmark experiment as a singular validity criterion, which we term retrodictive validity. We formally introduce a statistical model of this situation. We mathematically derive …
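The abstract's core proposal, using intended values from a benchmark experiment as a single validity criterion, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: the function names, the simulated data, and the choice of Pearson correlation as the retrodictive-validity statistic are all assumptions made for exposition.

```python
# Hypothetical sketch of "retrodictive validity": in a benchmark (calibration)
# experiment the latent attribute is experimentally set to known intended
# values; a measurement method is then judged by how well its estimates
# recover those intended values. All names and numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)

def benchmark_experiment(intended, noise_sd, n_trials=40):
    """Simulate raw observations whose latent mean is fixed by the design."""
    return np.array([rng.normal(mu, noise_sd, n_trials) for mu in intended])

def method_peak(obs):
    """Toy 'peak-scoring' estimator: maximum observation per condition."""
    return obs.max(axis=1)

def method_mean(obs):
    """Toy model-based stand-in: mean observation per condition."""
    return obs.mean(axis=1)

def retrodictive_validity(estimates, intended):
    """Correlation between method estimates and the intended (designed) values."""
    return np.corrcoef(estimates, intended)[0, 1]

intended = np.linspace(0.0, 1.0, 8)          # latent values fixed by the design
obs = benchmark_experiment(intended, noise_sd=0.5)

for name, method in [("peak", method_peak), ("mean", method_mean)]:
    v = retrodictive_validity(method(obs), intended)
    print(f"{name:>4s} scoring: retrodictive validity = {v:.2f}")
```

In this toy setup, whichever estimator correlates more strongly with the designed values would be preferred, which is the calibration logic the abstract describes.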

Cited by 38 publications (48 citation statements). References 43 publications.
“…Third, only structural MRI data were examined; functional imaging during threat learning could reveal additional correlates of learning circuitry ( Homan et al, 2019 ), although the delivery of aversive stimuli to participants with anxiety in the MRI scanner presents a challenge ( Thorpe et al, 2008 ). Fourth, SCR data were analyzed using a single method, which, while established, relies only on directly-observable effects; future studies may consider using novel, computational analysis methods which could potentially reveal effects not observed using the current method ( Bach et al, 2018 ; Bach and Friston, 2013 ; Bach et al, 2020 ; Ojala and Bach, 2020 ). Along these lines, a multiverse approach may be used in future work to comprehensively compare multiple methods of quantifying threat learning.…”
Section: Discussion
“…Compared with peak-scoring methods for SCR, this approach has been shown to better discriminate between responses to aversive and neutral stimuli ( Bach et al 2009 ; Bach and Friston 2013 ) and to better discriminate between responses to CS+ and CS− in fear conditioning ( Bach et al 2010 ). This better discrimination implies better accuracy and precision than standard SCR analysis in inferences on the latent CS–US association ( Bach et al 2020 ).…”
Section: Methods
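The quoted passage ranks SCR scoring methods by how well they separate CS+ from CS− responses, which is an instance of the benchmark logic sketched above. Below is a hedged sketch of such a comparison, with made-up data and a simple paired Cohen's d as the discrimination statistic; the statistic, the noise levels, and the method labels are assumptions, not the procedure used in the cited studies.

```python
# Hypothetical comparison of two SCR scoring methods by CS+/CS- discrimination.
# Per-participant CS+ and CS- scores are simulated; the discrimination statistic
# is a paired Cohen's d (mean difference / SD of differences). Illustrative only.
import numpy as np

rng = np.random.default_rng(seed=2)
n_participants = 30
true_effect = 0.4  # latent CS+ minus CS- difference, in arbitrary SCR units

def simulate_scores(measurement_noise_sd):
    cs_minus = rng.normal(1.0, 0.5, n_participants)
    cs_plus = cs_minus + true_effect + rng.normal(0.0, measurement_noise_sd,
                                                  n_participants)
    return cs_plus, cs_minus

def paired_cohens_d(a, b):
    diff = a - b
    return diff.mean() / diff.std(ddof=1)

# A noisier "peak-scoring" method versus a less noisy "model-based" method.
for name, noise in [("peak-scoring", 0.8), ("model-based", 0.4)]:
    d = paired_cohens_d(*simulate_scores(noise))
    print(f"{name:>12s}: CS+/CS- discrimination d = {d:.2f}")
```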
“…Establishing the latter may require a clearer view on the costs of pre-clinical trials, which are dictated by achievable effect sizes. These can be computed from behavioural variability in (untreated) control groups [ 127 , 128 ], and define statistical power for a human screening trial. Some promising steps into this direction have been taken for the NPU task [ 129 ].…”
Section: Synopsis
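The last passage notes that achievable effect sizes, computed from behavioural variability in untreated control groups, determine the statistical power and hence the cost of a screening trial. As a minimal sketch of that arithmetic, assuming a two-group comparison, a normal approximation, and entirely illustrative numbers:

```python
# Sketch: turn an expected raw treatment effect and the control group's
# behavioural variability into a standardised effect size, then into the
# per-group sample size needed for a given power. Numbers are illustrative.
from scipy.stats import norm

expected_raw_effect = 0.30   # assumed treatment-induced change (raw units)
control_sd = 0.75            # behavioural variability in untreated controls
alpha, power = 0.05, 0.80

cohens_d = expected_raw_effect / control_sd

# Normal-approximation sample size per group for a two-sided two-sample test.
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / cohens_d) ** 2

print(f"Cohen's d = {cohens_d:.2f}, ~{n_per_group:.0f} participants per group")
```

Holding the expected raw effect fixed, larger control-group variability shrinks Cohen's d and inflates the required sample size, which is why the quoted passage treats control-group variability as a driver of trial cost.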