2014
DOI: 10.1177/1745691614528519
Safeguard Power as a Protection Against Imprecise Power Estimates

Abstract: An essential first step in planning a confirmatory or a replication study is to determine the sample size necessary to draw statistically reliable inferences using power analysis. A key problem, however, is that what is available is the sample-based estimate of the effect size, and its use can lead to severely underpowered studies when the effect size is overestimated. As a potential remedy, we introduce safeguard power analysis, which uses the uncertainty in the estimate of the effect size to achieve a better …
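The core calculation is straightforward to sketch. Below is a minimal Python illustration (the paper itself supplies R code): it lowers a published Cohen's d to the lower limit of its two-sided 60% confidence interval (equivalently, an 80% one-sided lower bound) and then solves for the sample size at that safeguarded value. The pilot numbers (d = 0.45, 40 participants per group) are hypothetical, and the large-sample standard-error approximation for d is a common choice that may differ in detail from the paper's exact method.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

def safeguard_d(d, n1, n2, assurance=0.80):
    """Lower bound of d's two-sided 60% CI (= one-sided 80% bound)."""
    # Large-sample approximation to the standard error of Cohen's d
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - stats.norm.ppf(assurance) * se

# Hypothetical pilot study: d = 0.45 with 40 participants per group
d_safe = safeguard_d(0.45, 40, 40)   # ~0.26

solver = TTestIndPower()
n_naive = solver.solve_power(effect_size=0.45, power=0.80, alpha=0.05)
n_safe = solver.solve_power(effect_size=d_safe, power=0.80, alpha=0.05)
print(f"naive: {np.ceil(n_naive):.0f}/group, safeguard: {np.ceil(n_safe):.0f}/group")
# naive planning asks for ~79 per group; the safeguard estimate asks for
# ~234 per group, insuring against an overestimated pilot effect size
```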

Cited by 173 publications (178 citation statements) | References 45 publications
“…Moreover, because a meta-analysis can provide a better estimate of this effect size than any one study, it can more accurately inform power analyses for future studies. Because psychology studies are often underpowered (Bakker, van Dijk, & Wicherts, 2012), initial estimates of effect sizes are often inflated (Button et al., 2013); when power analyses for future studies use these inflated effect size estimates, they are also likely to be underpowered and thus less likely to replicate an effect even if the effect is real (Button et al., 2013; Perugini, Gallucci, & Costantini, 2014). Meta-analyses can help address this problem.…”
Section: The Present Meta-analysis (mentioning; confidence: 99%)
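The arithmetic behind that warning is easy to demonstrate. The numbers below are hypothetical: a follow-up study planned around an inflated published estimate of d = 0.50, when the true effect is d = 0.30, ends up with roughly 40% power rather than the intended 80%.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Plan the follow-up around the (inflated) published estimate
n_planned = int(np.ceil(solver.solve_power(effect_size=0.50, power=0.80,
                                           alpha=0.05)))   # 64 per group

# Actual power if the true effect is smaller, as Button et al. (2013) warn
true_power = solver.power(effect_size=0.30, nobs1=n_planned, alpha=0.05)
print(n_planned, round(true_power, 2))   # 64 per group, power ~0.40
```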
“…[6] These analyses were executed using Rouder et al.'s (2009) online calculator (http://pcl.missouri.edu/bf-two-sample) using the default scaling factor of r=1 and relevant t-values and ns (i.e., n1=47, n2=24, and t=2.35 for Correll's (2008) data and n1=198, n2=98, and t=.277 for our combined data). [7] To further bolster our position, we also executed a safeguard power analysis (Perugini, Gallucci, & Costantini, 2014) on our combined sample to rule out concerns regarding imprecision in our power calculations due to the noisy effect size estimate in Correll's (2008) original study. This analysis revealed that we required an N=232 to reliably detect (80% power) a lower-bound effect size (ds=.37) of Correll's observed effect size of d=.59 (R code for this analysis is available at https://osf.io/fejxb/ in "evaluating-replication-results.R").…”
Section: Discussion (mentioning; confidence: 99%)
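For readers who want to reproduce the Bayes-factor side of this footnote without the online calculator, the same JZS default-prior Bayes factor can be computed in Python. This is a sketch assuming the pingouin package, whose bayesfactor_ttest takes the t-value, the two group sizes, and the Cauchy prior scale r; the inputs are the ones quoted in the passage, and no specific BF values are asserted here.

```python
import pingouin as pg

# Original study (Correll, 2008): n1=47, n2=24, t=2.35, scale r=1
bf_original = pg.bayesfactor_ttest(2.35, 47, 24, r=1)

# Combined replication data: n1=198, n2=98, t=.277
bf_combined = pg.bayesfactor_ttest(0.277, 198, 98, r=1)

print(bf_original)   # BF10 for the original study's t-test
print(bf_combined)   # BF10 well below 1 here, i.e., support for the null
```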
“…Specifically, 40% of studies failed to report all experimental conditions, 70% of studies failed to report all outcome variables included in questionnaires, and the reported effect sizes were almost twice as large and three times more likely to be statistically significant compared to unreported effect sizes (Franco, Malhotra, & Simonovits, 2015). Additionally, O'Boyle, Banks, and Gonzalez-Mulé … (Asendorpf et al., 2013; Bakker et al., 2012; Button et al., 2013; Fraley & Vazire, 2014; Ioannidis, 2005, 2012; Lakens & Evers, 2014; Lucas & Donnellan, 2013; Nosek et al., 2012; Pashler & Harris, 2012; Perugini, Gallucci, & Costantini, 2014; Schimmack & Dinolfo, 2013; Simons, 2014).…”
Section: Epistemological Concerns of Adopting More Open Science Practices (mentioning; confidence: 99%)
“…Underscoring the importance of executing sufficiently powered studies, Perugini, Gallucci, and Costantini (2014) have even proposed a "safeguard power analysis" approach that overcomes the serious problem that effect size estimates from original studies are noisy and virtually always overestimated due to publication bias (Simonsohn, 2013). The logic of this approach is to calculate power based on a lower bound of the original effect size estimate, which "safeguards" a researcher in the event the true effect size is indeed lower than reported in the original published study.…”
Section: Epistemological Concerns of Adopting More Open Science Practices (mentioning; confidence: 99%)
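As a check, the safeguard numbers quoted in the Discussion excerpt above reproduce under this recipe. The sketch below assumes the standard large-sample standard error for d and rounds the safeguard estimate to two decimals, as reported; with those assumptions it recovers both ds = .37 and N = 232.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

d, n1, n2 = 0.59, 47, 24   # Correll (2008), as quoted above
se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
d_s = round(d - stats.norm.ppf(0.80) * se, 2)   # lower 60% CI bound -> 0.37

n_group = np.ceil(TTestIndPower().solve_power(effect_size=d_s,
                                              power=0.80, alpha=0.05))
print(d_s, int(2 * n_group))   # 0.37, N = 232
```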