2022
DOI: 10.3389/fpsyg.2022.812963
Sophisticated Statistics Cannot Compensate for Method Effects If Quantifiable Structure Is Compromised

Abstract: Researchers rely on psychometric principles when trying to gain understanding of unobservable psychological phenomena disconfounded from the methods used. Psychometric models provide us with tools to support this endeavour, but they are agnostic to the meaning researchers intend to attribute to the data. We define method effects as resulting from actions which weaken the psychometric structure of measurement, and argue that solution to this confounding will ultimately rest on testing whether data collected fit…

Cited by 7 publications (11 citation statements)
References 65 publications (103 reference statements)
“…Provision of feedback designed to change performance is one of the defining features of the dynamic testing paradigm ( Guthke and Beckmann 2000 ), but other forms of prompting may also change the way people approach problems. While the intention of such manipulations is to focus assessment on dynamic processes rather than static ones, an important theoretical implication of interpositions is that they may impact the validity of the assessment in unintended ways ( Birney et al 2022 ; Double and Birney 2019 ). Careful theorising and experimentation are necessary to ensure validity claims can be defended.…”
Section: Part 3: Theory Through Task Analysis
confidence: 99%
“…The same criticism would apply to an overuse of “shrinkage” in MLM regression models if this was observed to occur. Relying on sophisticated statistical tools to “purify” our measures from what are ultimately method-effects ( Birney et al 2022 ) reflects how little we understand about the sources of impurity (e.g., unreliability or multidimensionality) in our measures ( van der Maas et al 2017 ). We would do better to improve our measures using strong theory and better linked conceptual and empirical models, rather than make dubiously justified statistical adjustments.…”
Section: Implications and Final Considerations
confidence: 99%
“…For example, Rasch-scaled measurement can provide scores that are norm-independent and that are invariant with respect to given item content (e.g., Stemler and Naples 2021 ). More generally, the field of assessment is moving toward theory-based assessments that could serve as bases of CRT and other innovations ( Birney et al 2022 ; Broers 2021 ; Kellen et al 2021 ). Whatever theory is used probably needs to consider that adaptive intelligence is multidimensional, at the very least with regard to the dimensions that underlie even narrowly defined intelligence ( Carroll 1993 ), creativity, and wisdom ( Sternberg and Glück 2019 ).…”
Section: Discussion
confidence: 99%