2016
DOI: 10.1371/journal.pone.0149794

A Bayesian Perspective on the Reproducibility Project: Psychology

Abstract: We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors—a quantity that can be used to express comparative evidence for an hypothesis but also for the null hypothesis—for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively …
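As an illustration of the kind of quantity the authors compute, the sketch below evaluates a default (JZS) Bayes factor for a t test by numerical integration, following Rouder et al. (2009). This is not the paper's analysis code, and it omits the authors' adjustment for publication bias; the function name, the Cauchy prior scale r ≈ 0.707, and the example t values and sample sizes are illustrative assumptions only.

    # Minimal sketch (assumed helper, not the paper's code): default JZS Bayes
    # factor BF10 for a one-sample / paired t test, obtained by numerically
    # integrating over the Zellner-Siow prior (Rouder et al., 2009).
    import numpy as np
    from scipy import integrate

    def jzs_bf10(t, n, r=np.sqrt(2) / 2):
        """BF10 given a t statistic, sample size n, and Cauchy prior scale r."""
        nu = n - 1

        # Marginal likelihood under H1: average the t likelihood over
        # g ~ inverse-gamma(1/2, 1/2); the prior scale enters via n * g * r^2.
        def integrand(g):
            a = 1.0 + n * g * r**2
            return (a ** -0.5
                    * (1.0 + t**2 / (a * nu)) ** (-(nu + 1) / 2)
                    * (2.0 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1.0 / (2.0 * g)))

        marginal_h1, _ = integrate.quad(integrand, 0, np.inf)
        marginal_h0 = (1.0 + t**2 / nu) ** (-(nu + 1) / 2)  # likelihood under H0
        return marginal_h1 / marginal_h0

    # Hypothetical numbers, for illustration only: an "original" result and a
    # weaker replication result.
    print(jzs_bf10(t=2.5, n=30))   # > 1: evidence leaning toward H1
    print(jzs_bf10(t=0.8, n=60))   # < 1: evidence leaning toward H0

The paper itself additionally conditions the original studies' evidence on their having passed the significance filter; that step is omitted here.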

Cited by 281 publications (267 citation statements)
References 29 publications

Citation statements:
“…57-59). According to a Bayesian reanalysis of the Reproducibility Project, one pair of authors argued that "the apparent discrepancy between the original set of results and the outcome of the Reproducibility Project can be explained adequately by the combination of deleterious publication practices and weak standards of evidence, without recourse to hypothetical hidden moderators" (60). However, this paper did not directly code or analyze contextual sensitivity in any systematic way.…”
(mentioning; confidence: 97%)
“…On top of that, many available tools for professionals and students are either overpriced, too complex (i.e., displaying vast amounts of raw information neither demanded nor needed by the user) or too basic (i.e., not supporting advanced statistical procedures). These factors contribute to the reproducibility crisis in psychological science (Chambers et al., 2014; Etz and Vandekerckhove, 2016; Szucs and Ioannidis, 2016).…”
(mentioning; confidence: 99%)
“…If H0:H1 odds = 6 then FRP will be 67.92%. Looking at these numbers the replication crisis does not seem surprising: using NHST, very high FRP can be expected even with modestly high H0:H1 odds and moderate bias (Etz and Vandekerckhove, 2016). Hence, under realistic conditions FRP not only extremely rarely equals α or the p-value (and TRP extremely rarely equals 1 − α and/or 1 − p-value) but also, FRP is much larger than the generally assumed 5% and TRP is much lower than the generally assumed 95%.…”
(mentioning; confidence: 99%)
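The excerpt does not state the significance level, power, or bias values behind the 67.92% figure. As a point of reference only, one common formalization of the false report probability (FRP), following Ioannidis (2005) and written in terms of the prior H0:H1 odds ω, the significance level α, the power 1 − β, and the bias u, is

\[
\mathrm{FRP} \;=\; \frac{\big[\alpha + u(1-\alpha)\big]\,\omega}{\big[\alpha + u(1-\alpha)\big]\,\omega + (1-\beta) + u\beta},
\]

which with no bias (u = 0) reduces to αω / (αω + 1 − β) and grows quickly as ω and u increase. Whether the citing paper uses exactly this parameterization, and which power and bias values it assumes, cannot be determined from the excerpt.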