2020
DOI: 10.1016/j.envsoft.2020.104873
On code sharing and model documentation of published individual and agent-based models

Abstract: Being able to replicate research results is the hallmark of science. Replication of research findings using computational models should, in principle, be possible. In this manuscript, we assess code sharing and model documentation practices of 7500 publications about individual-based and agent-based models. Code availability increased over the years, up to 18% in 2018. Model documentation does not include all the elements that could improve the transparency of the models, such as mathematical eq…

Cited by 41 publications (16 citation statements)
References 23 publications
“…As shown in previous calls for transparency in COVID-19 modeling, modelers do not systematically provide their code [101]. In a transparency assessment, Jalali and colleagues found that most models do not share their code [102], which echoes similar observations about practices in agent-based modeling across application domains [103,104]. Our criteria thus meant that we could only assess a subset of existing models, and it is possible that different trends or initial error levels are observed in other models.…”
Section: Discussion (mentioning)
confidence: 56%
“…Yet, a recent study of nearly 8,000 articles on model-based research from 1990 through 2018, listed in ISI Web of Science, found that a majority do not make the model code available (Fig. 1) (2). Even for the most recent articles in the study, more than 80% do not provide access to the model code.…”
Section: Models and Open Science (mentioning)
confidence: 99%
“…found that most models do not share their code, [106] which echoes similar observations about practices in agent-based modeling across application domains. [107,108] Our criteria thus meant that we could only assess a subset of existing models and it is possible that different trends or initial error levels are observed in other models. We note that projects that shared their code were also transparent regarding how their computational results were produced, [109] hence we were able to perform verification and we applied the same level of transparency when conveying the model's parameters.…”
Section: Discussion (mentioning)
confidence: 99%