2015
DOI: 10.1037/a0039191
Opportunistic biases: Their origins, effects, and an integrated solution.

Abstract: Researchers commonly explore their data in multiple ways before deciding which analyses they will include in the final versions of their papers. While this improves the chances of researchers finding publishable results, it introduces an "opportunistic bias," such that the reported relations are stronger or otherwise more supportive of the researcher's theories than they would be without the exploratory process. The magnitudes of opportunistic biases can often be stronger than those of the effects being invest…

Cited by 33 publications (27 citation statements)
References 62 publications
“…These choices are also called researcher degrees of freedom (Simmons et al., 2011) in formulating hypotheses and in designing, running, analyzing, and reporting psychological studies, and they have received considerable recent interest for two main reasons. First, researchers’ opportunistic use of them greatly increases the chances of finding a false positive result (Ioannidis, 2005; Simmons et al., 2011; DeCoster et al., 2015), or a Type I error in the language of Neyman–Pearson’s variant of null hypothesis testing (NHST). Second, their strategic use in research may inflate effect sizes (Ioannidis, 2008; Bakker et al., 2012; Simonsohn et al., 2014; van Aert et al., 2016).…”
mentioning
confidence: 99%
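The first claim in this excerpt, that opportunistic use of researcher degrees of freedom inflates the false-positive rate, can be illustrated with a small simulation. This sketch is not from the cited paper; the number of outcome measures, group sizes, and simulation counts are arbitrary choices made for illustration. It models a researcher who measures several independent outcome variables under a true null effect and reports a "finding" if any one of them reaches p < .05.

```python
# Illustrative simulation (not from DeCoster et al., 2015): testing
# several outcome variables and reporting any significant one inflates
# the Type I error rate well beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(n_outcomes, n_sims=4000, n_per_group=30):
    """Fraction of null-effect studies that yield at least one p < .05
    when the researcher may report any of n_outcomes outcome measures."""
    hits = 0
    for _ in range(n_sims):
        # True effect is zero: both groups are drawn from the same distribution.
        a = rng.normal(size=(n_outcomes, n_per_group))
        b = rng.normal(size=(n_outcomes, n_per_group))
        pvals = stats.ttest_ind(a, b, axis=1).pvalue
        hits += (pvals < 0.05).any()
    return hits / n_sims

print(f"1 outcome : {false_positive_rate(1):.3f}")  # ≈ the nominal .05
print(f"5 outcomes: {false_positive_rate(5):.3f}")  # ≈ 1 - .95**5 ≈ .23
```

With five independent outcomes the chance of at least one spurious significant result is roughly 1 − 0.95⁵ ≈ 23%, which is the inflation the excerpt describes.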
“…As a consequence of this selection for significance, many published effect sizes over-estimate the population effect size (Ioannidis, 2008; Simonsohn et al., 2014a; van Aert et al., 2016), and many published statistically significant results may constitute false-positive findings (Ioannidis, 2005b). Second, analyses of (psychological) data often involve many (often arbitrary) choices that have to be made during data analysis and that researchers could use opportunistically when confronted with an (undesired) non-significant result (DeCoster et al., 2015; Ioannidis, 2005b; Nuzzo, 2015; Simmons et al., 2011). This use may result in statistically significant findings after all (denoted p-hacking), and may also result in overestimated effect sizes and dissemination of false positive results.…”
Section: Chapter 6 Abstract
mentioning
confidence: 99%
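The excerpt's other claim, that selection for significance makes published effect sizes over-estimate the population effect size, can also be shown with a short simulation. This is a hypothetical sketch, not an analysis from any of the cited papers; the true effect size, sample size, and study count are arbitrary. Many underpowered studies of a small true effect are simulated, and the mean observed effect is compared between all studies and the significant subset.

```python
# Hypothetical sketch: when only statistically significant studies are
# published, the average published effect size overstates the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d, n, n_studies = 0.2, 30, 5000  # small true effect, n per group

observed_d, significant = [], []
for _ in range(n_studies):
    a = rng.normal(true_d, 1.0, n)  # "treatment" group
    b = rng.normal(0.0, 1.0, n)     # "control" group
    # Cohen's d with pooled standard deviation.
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    observed_d.append(d)
    significant.append(stats.ttest_ind(a, b).pvalue < 0.05)

observed_d = np.array(observed_d)
significant = np.array(significant)
print(f"true d              : {true_d}")
print(f"mean d, all studies : {observed_d.mean():.2f}")
print(f"mean d, significant : {observed_d[significant].mean():.2f}")
```

Across all studies the mean observed d is unbiased (≈ 0.2), but among the significant subset it is several times larger, because with n = 30 per group only unusually large sample effects clear the significance threshold.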
“…Opportunistic use of researcher degrees of freedom is commonly known as 'p-hacking' (Gelman & Loken, 2013; John et al., 2012; Simmons, Nelson, & Simonsohn, 2013; Simonsohn, Nelson, & Simmons, 2014a) and is problematic for two main reasons. First, p-hacking greatly increases the chances of finding a false positive result (DeCoster, Sparks, Sparks, Sparks, & Sparks, 2015; Ioannidis, 2005b; Simmons et al., 2011). Second, it may inflate effect sizes (Ioannidis, 2008; Simonsohn et al., 2014a).…”
Section: Chapter
mentioning
confidence: 99%