2020
DOI: 10.1098/rsos.200566
Are replication rates the same across academic fields? Community forecasts from the DARPA SCORE programme

Abstract: The Defense Advanced Research Projects Agency (DARPA) programme 'Systematizing Confidence in Open Research and Evidence' (SCORE) aims to generate confidence scores for a large number of research claims from empirical studies in the social and behavioural sciences. The confidence scores will provide a quantitative assessment of how likely a claim is to hold up in an independent replication. To create the scores, we follow earlier approaches and use prediction markets and surveys to forecast replication outcomes.…

Cited by 43 publications (35 citation statements)
References 25 publications
"…The data and results presented in this paper can be used for future forecasting projects that are either planned or in progress [14], by informing experimental design and forecasting aggregation. The results can also be used to evaluate the predictive performance of prediction markets against other methods [33, 34, 40].…"
Section: Discussion
confidence: 99%
"…Concerns have been raised about the credibility of published results following John Ioannidis' landmark essay, "Why most published research findings are false" [3], and the identification of a considerable number of studies that turned out to be false positives [4, 5]. In response, several large-scale replication projects were initiated in the fields of psychology, experimental economics, and the social sciences more generally [6–14] to systematically evaluate a large sample of findings through direct replication. The rate of successful replication (commonly defined as a result with a statistically significant effect size in the same direction as the original effect) in these projects ranges from 36% to 62%.…"
Section: Introduction
confidence: 99%
"…People disagree about what degree of replicability should be expected from the published literature (Gordon et al., 2020). To provide empirical input for these discussions, we provide a descriptive summary of recent evidence concerning replicability in psychology.…"
Section: The State Of Replicability Of Psychological Science
confidence: 99%
"…Scholars working on this important topic have, for several decades now, continued to find significant room for disagreement on key issues of sampling, data sources, and statistical analysis methods [20], matters on which well-intentioned researchers may well disagree. In this and other similarly contentious debates, we believe that the use of prespecified research designs and analysis plans could be useful for advancing scientific progress, a point we return to below.…"
Section: Extension
confidence: 99%