2020
DOI: 10.1080/08870446.2020.1757098

Knowing how effective an intervention, treatment, or manipulation is and increasing replication rates: accuracy in parameter estimation as a partial solution to the replication crisis

Abstract: Objective: Although basing conclusions on confidence intervals for effect size estimates is preferred over relying on null hypothesis significance testing alone, confidence intervals in psychology are typically very wide. One reason may be a lack of easily applicable methods for planning studies to achieve sufficiently tight confidence intervals. This paper presents tables and freely accessible tools to facilitate planning studies for the desired accuracy in parameter estimation for a common effect size (Cohen…)
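The planning idea in the abstract can be sketched numerically: choose the sample size so that the expected confidence interval for Cohen's d is no wider than a target half-width. A minimal Python sketch, assuming equal group sizes and the large-sample variance approximation var(d) ≈ 2/n + d²/(4n); the function name is my own, and the paper's tables use exact noncentral-t computations, which can differ slightly:

```python
from math import ceil
from statistics import NormalDist

def aipe_n_per_group(d, halfwidth, conf=0.95):
    """Per-group n for a two-group design so the expected conf-level
    confidence interval for Cohen's d has the desired half-width.

    Uses the large-sample approximation var(d) ~ 2/n + d**2/(4*n)
    for equal group sizes; the exact AIPE procedure uses the
    noncentral t distribution and may give slightly different n.
    """
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # Solve z * sqrt((2 + d**2 / 4) / n) = halfwidth for n.
    return ceil(z ** 2 * (2 + d ** 2 / 4) / halfwidth ** 2)

# A "medium" effect (d = .5) estimated to within +/- .25:
print(aipe_n_per_group(0.5, 0.25))   # -> 127 per group
```

Note how quickly n grows as the target half-width shrinks: halving the half-width roughly quadruples the required sample size, which is why published confidence intervals are typically so wide.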


Cited by 24 publications (22 citation statements)
References 50 publications

Citation statements (ordered by relevance):
“…However, the number of observations per event surpassed the pre-registration. The confidence intervals reported above show that the proportion of event visitors wearing ear plugs could be accurately estimated, but there is room for improvement regarding the predictor in the multilevel models (Peters and Crutzen 2020). The relative increase compared to the control condition and NNT are deemed impactful given the limited resources needed for the intervention activities.…”
Section: Discussion
confidence: 98%
“…The reliability of outcome measures limits the range of standardised effect sizes which can be expected (although it can also increase their variance in small samples; Loken & Gelman, 2017), which is vital information for study design. Assuming constant underlying true scores, outcome measures with lower reliability have diminished power, meaning that more participants are required to reach the same conclusions, that the resulting parameter estimates are less precise (Peters & Crutzen, 2018), and that there is an increased risk of type M (magnitude) and type S (sign) errors (Gelman & Carlin, 2014). In extreme cases, if measured values are too dissimilar to the underlying ‘true’ values relative to any actual true differences between individuals, then a statistical test will have little to no possibility to infer meaningful outcomes: this has been analogised as “Gelman’s kangaroo” (Gelman, 2015; Wagenmakers & Gronau, 2017).…”
Section: Introduction
confidence: 99%
“…Once the MCD-based Cohen's d is established, the required sample size can be computed using power or AIPE computations described in a previous section (see also Cohen, 1988; Cumming, 2014; Faul et al., 2007; Peters and Crutzen, 2020). To illustrate the ballpark, Table 2 shows, for control event rates of 5%, 25% and 50% and MCDs of 2.5%, 5%, 10% and 25%, the required sample sizes to obtain 80% and 95% power and to estimate the intervention effect with 95% confidence interval half-widths of d = .10 and d = .25.…”
Section: Step 5: Plan Your Sample Size
confidence: 99%
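The power-based half of the calculation that statement describes can be sketched with the usual normal approximation for a two-sample comparison of means. This is a rough stand-in for tools such as G*Power, not the paper's own code; exact t-based computations give slightly larger n:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_power(d, power=0.80, alpha=0.05):
    """Per-group n to detect Cohen's d in a two-sample comparison,
    using the normal approximation to the t test; exact t-based
    answers run slightly higher (e.g. 64 rather than 63 per group
    for d = .5 at 80% power).
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided criterion
    z_power = nd.inv_cdf(power)          # desired power
    return ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

print(n_per_group_power(0.50))              # -> 63 (d = .5, 80% power)
print(n_per_group_power(0.25, power=0.95))  # -> 416
```

The AIPE entries in a table like the one described would come from the analogous calculation that targets a confidence interval half-width rather than a rejection probability; for small effects the accuracy target often demands more participants than the power target.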
“…First, if only one study has been conducted, given the small sample sizes and low statistical power that have long been customary in the psychological literature (Marszalek et al., 2011), the effect size estimate obtained in that study likely originates from an exceedingly wide sampling distribution. In other words, the observed effect size in that single study is to a large degree arbitrary, may differ considerably in the next direct replication (Peters & Crutzen, 2020), and as such does not provide a solid starting point for inferences about the likely population effect size (see also …). If multiple studies have been conducted and their effect size estimates are extracted from the extant literature, meta-analysis may yield a more reliable estimate.…”
confidence: 99%