2018
DOI: 10.1073/pnas.1708290115

An empirical analysis of journal policy effectiveness for computational reproducibility

Abstract: A key component of scientific communication is sufficient information for other researchers in the field to reproduce published findings. For computational and data-enabled research, this has often been interpreted to mean making available the raw data from which results were generated, the computer code that generated the findings, and any additional information needed such as workflows and input parameters. Many journals are revising author guidelines to include data and code availability. This work evaluate…

Cited by 279 publications (297 citation statements)
References 23 publications
“…An evaluation of QSP models published in CPT:PSP found that only 4/12 evaluated were executable, in that figures from the associated manuscript could be generated from the materials provided. Looking beyond QSP, an analysis of 204 computational models published in Science similarly found that results from only 26% could be replicated. Provision of simple "readme" and "run" files would help alleviate this problem—namely, a text file that explicitly describes the code, accompanied by a single script that loads and simulates the model to generate at least one figure from the manuscript.…”
Section: Model Transparency (mentioning)
confidence: 99%
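
The "readme" plus "run" file recommendation quoted above is straightforward to make concrete. Below is a minimal sketch, in Python, of what such a single run script could look like; the module name `model`, the functions `load_model()` and `simulate()`, and the regenerated figure are hypothetical placeholders standing in for whatever code actually accompanies a manuscript, not anything taken from the cited papers.

```python
"""run.py -- minimal, self-documenting entry point for reproducing one figure.

Hypothetical assumptions: the shared materials include a module `model.py`
exposing `load_model()` and `simulate(model, t_end)`, and Figure 1 of the
manuscript is the target output. Adapt the names to the code actually shared.
"""

import matplotlib.pyplot as plt

from model import load_model, simulate  # hypothetical module shipped alongside the paper


def main() -> None:
    # Load the model exactly as parameterized in the manuscript (no hand tuning).
    model = load_model()

    # Run the simulation underlying the published figure.
    time, output = simulate(model, t_end=100.0)

    # Recreate the figure and save it next to the script for direct comparison.
    plt.plot(time, output)
    plt.xlabel("time")
    plt.ylabel("model output")
    plt.title("Reproduction of manuscript Figure 1 (illustrative)")
    plt.savefig("figure1_reproduced.png", dpi=300)


if __name__ == "__main__":
    main()
```

A companion "readme" would then only need to list the required software versions and the single command to run, e.g. `python run.py`.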
“…Factors contributing to the current low rate of data and code sharing with newly developed methods include an absence of journal policies requiring the public sharing of these resources and infrastructural challenges to sharing large data generated by the benchmarking studies 53.…”
Section: Availability of Benchmarking Data and Supporting Documentation (mentioning)
confidence: 99%
“…While the vast majority of surveyed benchmarking studies widely disseminated benchmarking data, only 42% of the studies surveyed completely shared benchmarking data (Table 1). Most studies adopted the 'shared upon request' model, which is a less reliable and less reproducible method of data dissemination as it relies on individual authors' availability to perpetually share data 53,54.…”
Section: Availability of Benchmarking Data and Supporting Documentation (mentioning)
confidence: 99%
“…Anyone who has tried to recreate a model and replicate a simulation from an article may not be so sure. Indeed, the reproducibility of published computational research has been reported as similarly dismal (~25%). The barriers to computational reproducibility include a lack of standardization for building and representing models, a lack of documentation on usage, and a lack of transparency (sometimes intentional) by authors, in addition to simple coding errors and typos.…”
Section: Introduction (mentioning)
confidence: 99%