2021
DOI: 10.1080/00461520.2021.1898962

Strengthening the foundation of educational psychology by integrating construct validation into open science reform

Jessica Kay Flake

Abstract: Part of a special issue titled "Educational psychology in the open science era." This preprint has been peer reviewed, but not copy edited by the journal, and may differ from the final published version.

Cited by 46 publications (38 citation statements)
References 54 publications
“…Rather, measures show greater or lesser evidence of validity for a certain use on a certain population within a certain context (Gehlbach, 2015). So if validity can only properly be assessed after data describing the participants and their context are collected and analyzed, how will open-science practices like preregistration and registered reports that require a priori assumptions work (Flake, 2021/this issue)? By no means are these the only challenges for educational psychologists in adopting and adapting open-science practices. Other challenges arise around specific practices.…”
Section: Additional Challenges and Future Tensions (mentioning, confidence: 99%)
“…However, ad hoc measures are commonly used without any evidence of their validity or without any reasoned justification as to why they are appropriate (Flake & Fried, 2020). Accordingly, they may appear face valid, but can be problematic because they do not pass through a rigorous process of test construction and psychometric analysis to formally establish their validity (Flake, 2021; Flake et al., 2017). Given that study conclusions depend on the valid measurement of the studied constructs, the use of unvalidated and unjustified ad hoc measures has even been described as a “questionable measurement practice” (Flake & Fried, 2020, p. 458).…”
Section: Measuring Goal Dimensions (mentioning, confidence: 99%)
“…Given that study conclusions depend on the valid measurement of the studied constructs, the use of unvalidated and unjustified ad hoc measures has even been described as a “questionable measurement practice” (Flake & Fried, 2020, p. 458). Questionable measurement practices have been claimed to be common in many subdisciplines of psychology, including social, personality, and educational psychology (Flake, 2021; Flake et al., 2017). Our impression that there seems to be little agreement on how to measure goal dimensions, and the claim that questionable measurement practices are common in psychology, gave us reason to review whether such practices are present in the measurement of goals.…”
Section: Measuring Goal Dimensions (mentioning, confidence: 99%)
“…With direct relevance for open science practices, however, more focused attention on the “art” of replication seems warranted. For example, how does one identify studies worthy of replication (see Makel et al., 2012), and to what extent should measurement issues inform these decisions (Flake, 2021/this issue)? In addition, how does one ascertain the reasons for failures to replicate: was the original study or the replication flawed, and how many replications are necessary before a finding is considered proven or disproven?…”
Section: Strengths and Limitations of Replication as a Tool for Open Science Reform (mentioning, confidence: 99%)