2016
DOI: 10.1177/0956797616647519
Researchers’ Intuitions About Power in Psychological Research

Abstract: Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alp…
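The power calculation the abstract refers to (actual power derived from cell size, effect size, and alpha) can be sketched in a few lines. This is a minimal illustration using a normal approximation to a two-sided, two-sample t-test, not the authors' actual procedure; the example numbers (24 subjects per cell, d = 0.5) are assumptions chosen for illustration, not figures from the survey.

```python
from statistics import NormalDist  # stdlib, Python >= 3.8

def approx_power(n_per_cell, d, alpha=0.05):
    """Normal approximation to the power of a two-sided two-sample t-test.

    n_per_cell: subjects per group; d: standardized effect size (Cohen's d);
    alpha: two-sided Type I error rate. Slightly optimistic for small n,
    since it ignores the heavier tails of the t distribution.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-sided critical value, ~1.96
    ncp = d * (n_per_cell / 2) ** 0.5      # noncentrality under H1
    # Probability the test statistic lands in either rejection region:
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# Assumed illustrative numbers: a "medium" effect (d = 0.5) with
# 24 subjects per cell yields power of only about 0.41.
print(round(approx_power(24, 0.5), 2))  # 0.41
```

Inverting the same formula shows why such designs fall short: reaching the conventional 80% power for d = 0.5 requires roughly 64 subjects per cell under this approximation.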

Cited by 143 publications (141 citation statements)
References 44 publications (70 reference statements)
“…One's perception of what practices are defensible is likely influenced by one's understanding of the consequences of questionable practices. But surveys consistently find that researchers generally have inadequate or incorrect understandings of statistical concepts relevant to their work (e.g., Bakker et al., 2016; Gigerenzer, 2004; Greenland et al., 2016; Tversky & Kahneman, 1971). How does one square this with the fact that, for decades, there have been clear, consistent, and repeated warnings about low power (e.g., Cohen, 1962, 1969, 1988; Lane & Dunlap, 1978) and selective reporting (e.g., Greenwald, 1975; Rosenthal, 1975; Sterling, 1959)?…”
Section: Results
“…-The Talking Cricket, shortly before Pinocchio kills him with a hammer (Collodi, 1883). The slowness, if not resistance, of psychological scientists to adopt better statistical and methodological practices has been well documented (see, e.g., Cohen, 1990, 1994; Cumming et al., 2007; Fidler et al., 2004; Gigerenzer, 2004; Sharpe, 2013). Despite the long existence of an extensive methodological literature identifying poor scientific practices and proposing solutions (e.g., Cohen, 1962, 1969; Meehl, 1978; Sterling, 1959), researchers frequently misunderstand and inadequately address statistical concepts fundamental to their methodologies, such as power (Bakker et al., 2016; Tversky & Kahneman, 1971) and p-values (Gigerenzer, 2004; Greenland et al., 2016). Separately but relatedly, meta-scientists have also documented the alarmingly wide prevalence of questionable research practices in psychology (and other sciences), such as selective reporting, data peeking, unplanned statistical analyses, and hypothesizing after the results are known (see, e.g., Agnoli et al., 2017; Bakker et al., 2012; Fraser et al., 2018; John, Loewenstein, & Prelec, 2012; Kerr, 1998; Simmons, Nelson, & Simonsohn, 2011).…”
Section: Trouble in the Land of Toys
“…We also wanted to ensure small effects could be detected at both time points in the event that we experienced lower-than-expected re-response rates (Bakker et al., 2016; Daly and Nataraajan, 2015). In total, the Time 1 survey was completed by 1516 participants.…”
Section: Methods, Participants and Procedures
“…Previous concern about power (Cohen, 1962; Sedlmeier & Gigerenzer, 1989; Marszalek, Barber, Kohlhart, & Holmes, 2011; Bakker, van Dijk, & Wicherts, 2012), which was even addressed by an APA Statistical Task Force in 1999 that recommended increased statistical power (Wilkinson, 1999), seems not to have resulted in actual change (Marszalek, Barber, Kohlhart, & Holmes, 2011). Potential explanations for this lack of change are that researchers overestimate statistical power when designing a study for small effects (Bakker, Hartgerink, Wicherts, & van der Maas, 2016), use p-hacking to artificially increase statistical power, and can act strategically by running multiple underpowered studies rather than one large, powerful study (Bakker, van Dijk, & Wicherts, 2012). The effects of p-hacking are likely to be the most pervasive, with many people admitting to using such behaviors at some point (John, Loewenstein, & Prelec, 2012) and publication bias pushing researchers to find statistically significant results.…”
Section: Discussion