2021
DOI: 10.1002/jrsm.1529
Retrospective median power, false positive meta‐analysis and large‐scale replication

Abstract: Recent, high‐profile, large‐scale, preregistered failures to replicate uncover that many highly‐regarded experiments are “false positives”; that is, statistically significant results of underlying null effects. Large surveys of research reveal that statistical power is often low and inadequate. When the research record includes selective reporting, publication bias and/or questionable research practices, conventional meta‐analyses are also likely to be falsely positive. At the core of research credibility lies…

Cited by 21 publications (24 citation statements)
References 94 publications (272 reference statements)
“…A recent paper by Stanley et al 42 focuses on the actual power (which they refer to as the retrospective power). They estimate the actual power of a number of studies from meta‐analyses of comparable studies.…”
Section: Discussion
confidence: 99%
“…Statistical power, then, can tell us how many statistically significant effects we should expect to find based on random sampling alone. Here, we calculate power, retrospectively, from meta-analysis (Ioannidis et al., 2017; Stanley et al., 2018, 2022). Each reported standard error, SE_i, is the estimate of the standard deviation of the sampling distribution for that study, and meta-analysis allows us to estimate the mean effect size with UWLS.…”
Section: The Power Of Excess Statistical Significance
confidence: 99%
“…Each reported standard error, SE_i, is the estimate of the standard deviation of the sampling distribution for that study, and meta-analysis allows us to estimate the mean effect size with UWLS. Retrospective power for each study i, then, can be calculated as 1 − N(1.96 − |UWLS|/SE_i), where N() denotes the cumulative standard normal probability and the conventional .05 level of significance is assumed (Ioannidis et al., 2017; Stanley et al., 2018, 2022). Retrospective power calculated in this manner is “conservative” in the sense that it tends to overestimate power and thereby underestimate ESS, erring on the side of the unbiasedness and satisfactory quality of the research record.…”
Section: Weighted and Iterated Least Squares
confidence: 99%
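The retrospective-power calculation quoted above can be sketched in Python. This is an illustrative implementation, not code from the cited papers: the function name `retrospective_power` and its arguments are assumptions, and it takes the UWLS meta-analytic mean (computed here as the inverse-variance weighted mean) as the stand-in for the true effect in a two-sided z-test at the conventional .05 level.

```python
import numpy as np
from scipy.stats import norm

def retrospective_power(effects, ses, alpha=0.05):
    """Retrospective power of each study in a meta-analysis.

    Illustrative sketch: the meta-analytic mean (UWLS, computed here as
    the inverse-variance weighted mean) is taken as the assumed true
    effect, and each study's power is the probability that a two-sided
    z-test at level `alpha` rejects the null given that effect and the
    study's standard error SE_i.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                       # inverse-variance weights
    uwls = np.sum(w * effects) / np.sum(w)  # weighted mean effect size
    z_crit = norm.ppf(1 - alpha / 2)       # 1.96 for alpha = .05
    z_true = np.abs(uwls) / ses            # each study's noncentrality
    # Two-sided power: P(|Z| > z_crit) when Z ~ Normal(z_true, 1).
    # The second term covers the far tail and is usually negligible.
    return norm.sf(z_crit - z_true) + norm.cdf(-z_crit - z_true)
```

As the quoted passage notes, power computed this way tends to be conservative: studies with small SE_i dominate the weighted mean, so individual underpowered studies are judged against a relatively well-estimated effect.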
“…Note that publication bias and p-hacking are observationally equivalent, so for parsimony we will use the term publication bias to describe both, as is common in the meta-analysis literature. Many studies have recently discussed how publication bias can exaggerate empirical estimates in economics (Brodeur et al., 2016; Bruns & Ioannidis, 2016; Card et al., 2018; Christensen & Miguel, 2018; DellaVigna et al., 2019; Blanco-Perez & Brodeur, 2020; Brodeur et al., 2020; Ugur et al., 2020; Xue et al., 2020; Neisser, 2021; Stanley et al., 2021; DellaVigna & Linos, 2022; Stanley et al., 2022), and the exaggeration can be twofold or more (Ioannidis et al., 2017). Publication bias is natural, common in economics, and does not imply cheating or any ulterior motives on the part of the researchers.…”
Section: Calibrated Estimates
confidence: 99%