2019
DOI: 10.31234/osf.io/g5ryh
Preprint

Reproducibility of Individual Effect Sizes in Meta-Analyses in Psychology

Abstract: To determine the reproducibility of psychological meta-analyses, we investigated whether we could reproduce 500 primary study effect sizes drawn from 33 published meta-analyses based on the information given in the meta-analyses, and whether recomputations of primary study effect sizes altered the overall results of the meta-analysis.

Cited by 12 publications (14 citation statements)
References 20 publications

“…Often, the meta-analysts calculated the effect size and its SE of a primary study themselves. Analyzing data and reporting results are error prone (see, e.g., Bakker and Wicherts 2011; Gøtzsche et al. 2007; Maassen et al. 2020; Mathes et al. 2017; Nuijten et al. 2016; Petrocelli et al. 2012). To minimize the risk of copying erroneously calculated or reported effect sizes and SEs, we calculated the effect sizes and SEs ourselves using data reported in the meta-analysis, where possible.…”
Section: Methods
confidence: 99%
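The recomputation described in this quote starts from summary statistics reported in the meta-analysis. As a minimal illustration (not the authors' actual procedure), a standardized mean difference (Cohen's d) and its standard error can be recomputed from group means, standard deviations, and sample sizes; the function name and the use of the common large-sample SE approximation are assumptions of this sketch:

```python
import math

def cohens_d_and_se(m1, sd1, n1, m2, sd2, n2):
    """Recompute Cohen's d and its approximate SE from group summary statistics.

    Illustrative helper (hypothetical name); uses the standard pooled-SD
    definition of d and a common large-sample approximation for its SE.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample approximation for the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, se

# Example: two groups of 50, means 10.0 vs 9.0, both SDs 2.0
d, se = cohens_d_and_se(10.0, 2.0, 50, 9.0, 2.0, 50)
```

Comparing such recomputed values against the effect sizes printed in a meta-analytic table is one way to detect copying or calculation errors in primary-study data.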
“…Achieving reproducibility is a basic foundation of credibility, and yet many efforts to test reproducibility reveal success rates below 100%. For example, Artner and colleagues (2020) successfully reproduced just 70% of 232 findings, and 18 of those only after deviating from the reported analysis in the paper (see also Hardwicke et al., 2018; Maassen et al., 2020; Nuijten et al., 2016). Whereas an outcome reproducibility failure suggests that the original result may be wrong, a process reproducibility failure merely indicates that it cannot be verified.…”
Section: Reproducibility
confidence: 99%
“…Moreover, a range of tools can be used in this framework to deal with heterogeneous data in an integrated view, and even to check for bias in the field as a whole (e.g., Sterne et al., 2005). Yet meta-analyses themselves are not immune to error (Ioannidis, 2016), and recent investigations into the transparency and reproducibility of meta-analyses revealed considerable issues (Polanin, Hennessy & Tsuji, 2020; Maassen et al., 2020). This makes sense: no tool can force its handler to use it wisely.…”
Section: Qualitative Reviews
confidence: 99%