2016 · Preprint
DOI: 10.31219/osf.io/k4bgq

Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability

Abstract: An academic scientist’s professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. Prior reports demonstrate how these incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulatio…


Cited by 604 publications (843 citation statements)
References 100 publications (145 reference statements)
“…The psychological sciences are seeing today a revival of concern in the robustness of our results, and particularly to what extent published results are indeed replicable. The problem likely emerges from the fact that the current reward scheme pushes researchers, reviewers, and editors to value more results where there is a p-value below the .05 threshold than results that are not significant (see e.g., Ioannidis, 2005; Nosek, Spies, & Motyl, 2012 for the general argument, and Open Science Collaboration, 2015 for a recent set of results in psychology). As a consequence, there is an over-representation of significant results in the literature, and quite likely an increase in the number of false positives that are present in published studies (Sterling, Rosenbaum, & Weinkam, 1995).…”
Section: Why Carry Out a Meta-analysis? (mentioning; confidence: 99%)
“…A simple answer is that there are pragmatic barriers to sharing and few incentives to overcome them [15]. The present academic culture emphasizes publications and grants as researchers' primary incentives.…”
Section: Introduction (mentioning; confidence: 99%)
“…We also recognize that the sample size for Study 1 was small, and the study is likely underpowered. However, we believe it important to share all data with the scientific community, especially in light of current debates about transparency and scientific openness [44]. Our measure of physical and emotional satisfaction may also be improved upon.…”
Section: Discussion (mentioning; confidence: 99%)