2015
DOI: 10.1177/1948550615598377

Underreporting in Psychology Experiments

Abstract: Many scholars have raised concerns about the credibility of empirical findings in psychology, arguing that the proportion of false positives reported in the published literature dramatically exceeds the rate implied by standard significance levels. A major contributor to false positives is the practice of reporting only a subset of the potentially relevant statistical analyses pertaining to a research project. This study is the first to provide direct evidence of selective underreporting in psychology experiments.
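The abstract's claim that selective reporting pushes the false-positive rate above the nominal significance level can be illustrated with a short simulation (a minimal sketch; the number of outcome measures and the trial count are hypothetical, not taken from the paper). Under a true null hypothesis p-values are uniform on [0, 1], so writing up whichever of k tests "worked" raises the apparent hit rate from α to roughly 1 − (1 − α)^k:

```python
import random

random.seed(1)

ALPHA = 0.05   # nominal significance level
K = 5          # hypothetical number of outcome measures per study
TRIALS = 100_000

def study_finds_effect(k: int) -> bool:
    """Simulate one null study with k outcome measures.

    Under the null, each test's p-value is uniform on [0, 1].
    Selective reporting means the study is written up as 'significant'
    if ANY of the k tests crosses the threshold.
    """
    return any(random.random() < ALPHA for _ in range(k))

honest = sum(study_finds_effect(1) for _ in range(TRIALS)) / TRIALS
selective = sum(study_finds_effect(K) for _ in range(TRIALS)) / TRIALS

print(f"reporting the single planned test: {honest:.3f}")
print(f"reporting the best of {K} tests:    {selective:.3f}")
```

With k = 5 the simulated rate lands near 1 − 0.95⁵ ≈ 0.23 rather than the nominal 0.05, even though no real effect exists.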

Cited by 115 publications (114 citation statements). References 22 publications.
“…This is a practice that a substantial proportion of psychologists admitted to (Agnoli et al., 2017; John et al., 2012; see also http://psychdisclosure.org). This practice has also been observed directly, when comparing published and unpublished results in a known population of conducted studies (Chan, Hrobjartsson, Haahr, Gotzsche, & Altman, 2004; Franco et al., 2016; O'Boyle, Banks, & Gonzalez-Mule, 2017).…”

Section: Solutions
confidence: 58%
“…This becomes even more problematic, if only the analyses that "worked" are reported and presented as if they were planned from the start (Kerr, 1998; Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). Survey results show that many psychologists admit to such "questionable research practices" (QRPs; Agnoli, Wicherts, Veldkamp, Albiero, & Cubelli, 2017; John et al., 2012), and use of study registers and later disclosures by researchers provide direct evidence that indeed some of these practices are quite common (Franco, Malhotra, & Simonovits, 2016; LeBel et al., 2013).…”

Section: Bias and Errors
confidence: 99%
“…Among the research practices studied by John et al., this practice of not reporting all dependent measures showed quite high prevalence estimates (John et al., 2012). Additionally, direct evidence indicates that psychological researchers often choose among different outcome measures (Franco et al., 2016; LeBel, Borsboom, Giner-Sorolla, Hasselman, Peters, Ratliff, & Smith, 2013). In the medical literature on randomized clinical trials, this researcher DF is often called outcome switching and the bias it introduces is called outcome reporting bias (Chan, Hrobjartsson, Haahr, Gotzsche, & Altman, 2004; Kirkham et al., 2010; Weston et al., 2016).…”

Section: Dependent Variable(s)
confidence: 99%
“…Dropping of experimental conditions has been found to be quite common in a survey among psychological researchers (John et al., 2012) and in a study that considered psychological studies from a register (Franco, Malhotra, & Simonovits, 2016). Specifically, a researcher can discard a factor in a multifactorial experiment by pooling the data over the levels of that factor, or the researcher can select certain levels of a discarded factor.…”

Section: Design Phase
confidence: 99%
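The pooling-or-selecting move described in that citation statement can also be simulated (a hedged sketch: the 2×2 layout, cell size, and simple z-test are illustrative assumptions, not the cited studies' methods). With a true null in every cell, a researcher free to report either the comparison pooled over factor B or the comparison within a single level of B ends up well above the nominal 5% false-positive rate:

```python
import math
import random
import statistics

random.seed(2)

ALPHA = 0.05
N = 30           # per-cell sample size (hypothetical)
TRIALS = 20_000

def z_test_p(x, y):
    """Two-sided z-test for a difference in means, unit variance assumed."""
    z = (statistics.fmean(x) - statistics.fmean(y)) / math.sqrt(2 / len(x))
    return math.erfc(abs(z) / math.sqrt(2))

def cell():
    """One cell of a 2x2 design; the null is true everywhere."""
    return [random.gauss(0.0, 1.0) for _ in range(N)]

planned_hits = flexible_hits = 0
for _ in range(TRIALS):
    a1b1, a1b2, a2b1, a2b2 = cell(), cell(), cell(), cell()
    p_pooled = z_test_p(a1b1 + a1b2, a2b1 + a2b2)  # factor B pooled away
    p_b1 = z_test_p(a1b1, a2b1)                    # only level B1 kept
    p_b2 = z_test_p(a1b2, a2b2)                    # only level B2 kept
    planned_hits += p_pooled < ALPHA
    flexible_hits += min(p_pooled, p_b1, p_b2) < ALPHA

planned = planned_hits / TRIALS
flexible = flexible_hits / TRIALS
print(f"single planned analysis:       {planned:.3f}")
print(f"best of three design variants: {flexible:.3f}")
```

The three analysis paths are correlated, so the inflation is smaller than for independent tests, but it is still substantial relative to the nominal α.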