2019
DOI: 10.31234/osf.io/3bdfu
Preprint

A Multi-faceted Mess: A Systematic Review of Statistical Power Analysis in Psychology Journal Articles

Abstract: The over-reliance on the null hypothesis significance testing framework and its accompanying tools has recently been challenged. An example of such a tool is statistical power analysis, which is used to determine how many participants are required to detect a minimally meaningful effect size in the population at a given level of power and Type I error rate. To investigate how power analysis is currently used, this study reviews the reporting of 443 power analyses in high impact psychology journals in 2016 and …
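
For concreteness, the a priori calculation the abstract describes can be reproduced in a few lines. The sketch below is a minimal illustration using Python's statsmodels package (an assumption; power analyses in psychology are often run in tools such as G*Power instead), with Cohen's d = 0.5 as a placeholder for the minimally meaningful effect size:

    # A priori power analysis: solve for the per-group sample size needed to
    # detect a minimally meaningful effect at a given power and Type I error rate.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,          # smallest effect of interest (Cohen's d); placeholder value
        alpha=0.05,               # Type I error rate
        power=0.80,               # desired statistical power
        ratio=1.0,                # equal group sizes
        alternative="two-sided",
    )
    print(f"Required sample size per group: {n_per_group:.1f}")  # about 63.8, i.e. 64 per group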

Cited by 13 publications (9 citation statements)
References 10 publications

“…The STAR and Pathways samples are existing datasets, thus, a sensitivity power analysis was used to calculate the minimally detectable effect sizes (MDES) given the sample sizes for all statistical analyses (Cribbie, Beribisky, & Alter, 2019; Giner-Sorolla et al., 2019). This provides some context for why we see different rates of significance across the studies for given effect sizes.…”
Section: Sensitivity Power Analyses (mentioning; confidence: 99%)
“…However, the problem is researchers rarely justify their choice of effect size (Bakker et al., 2020; Cribbie et al., 2019), shedding no insight into the thought process behind their power analysis.…”
Section: Effect Size (mentioning; confidence: 99%)
“…Power analysis is not the only way to justify your sample size (see Lakens, 2021), but despite increased attention to statistical power, it is still rare to find articles that justified their sample size through power analysis (Chen & Liu, 2019; Guo et al., 2014; Larson & Carbine, 2017). Even for those that do report a power analysis, there are often other problems such as poor justification for the effect size, misunderstanding statistical power, or not making the power analysis reproducible (Bakker et al., 2020; Collins & Watt, 2021; Cribbie et al., 2019). Therefore, we present a tutorial which outlines the key decisions behind power analysis and walk through two of the simplest applications: two independent samples and two dependent samples.…”
Section: Introduction (mentioning; confidence: 99%)
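
The two designs that the tutorial cited above walks through can be handled with the same machinery; a rough companion sketch (statsmodels assumed, effect sizes are placeholders) contrasts the independent-samples and dependent-samples cases:

    from statsmodels.stats.power import TTestIndPower, TTestPower

    # Two independent samples: per-group n to detect d = 0.5 at 80% power.
    n_ind = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

    # Two dependent (paired) samples: number of pairs to detect d_z = 0.5,
    # where d_z is the standardized mean of the paired differences.
    n_dep = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

    print(f"Independent samples: {n_ind:.1f} per group")  # about 64 per group
    print(f"Dependent samples:   {n_dep:.1f} pairs")       # about 34 pairs

The dependent-samples design needs fewer observations here only because d_z already folds the correlation between paired scores into the effect size.
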
“…The seven studies used in this manuscript are existing datasets, thus, a sensitivity power analysis was used to calculate the range of minimally detectable effect sizes (MDES) given the sample sizes across the proposed correlations (Cribbie, Beribisky, & Alter, 2019; Giner-Sorolla et al., 2019). Across all proposed moderators, the highest sample size was n = 634 and the smallest sample size was n = 71.…”
Section: Sensitivity Power Analyses (mentioning; confidence: 99%)
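
For correlational analyses like the ones quoted above, the minimally detectable effect size can also be approximated in closed form with the Fisher z transformation. The sketch below applies that approximation to the largest and smallest sample sizes mentioned in the excerpt (n = 634 and n = 71) purely to illustrate the method; it is not a reproduction of the cited authors' own calculations:

    from math import sqrt, tanh
    from scipy.stats import norm

    def mdes_correlation(n, alpha=0.05, power=0.80):
        """Minimally detectable correlation under the Fisher z approximation."""
        z_crit = norm.ppf(1 - alpha / 2)   # two-sided critical value
        z_power = norm.ppf(power)          # standard normal quantile for the desired power
        return tanh((z_crit + z_power) / sqrt(n - 3))

    for n in (71, 634):   # sample sizes quoted in the citation statement above
        print(f"n = {n:>3}: minimally detectable r ~ {mdes_correlation(n):.2f}")
    # roughly r = .33 at n = 71 and r = .11 at n = 634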