Objective
Like all scientific research, computational neuroscience research
must be reproducible. Big data science, including simulation research,
cannot rely exclusively on journal articles to provide the sharing and
transparency required for reproducibility.
Methods
Ensuring model reproducibility requires the use of multiple standard
software practices and tools, including version control, strong commenting
and documentation, and code modularity.
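As a hypothetical illustration of these practices, the minimal Python sketch below (all names and parameter values are invented for illustration, not taken from any published model) separates a declarative parameter set from the simulation code and records a provenance hash, so that the exact parameters used in a run can be tied to a version-controlled file.

"""minimal_model.py: illustrative sketch, not a specific published model."""
import hashlib
import json

# Parameters kept in one declarative structure so they can be tracked in
# version control separately from the simulation code.
PARAMS = {
    "membrane_capacitance_uF_cm2": 1.0,   # illustrative value
    "leak_conductance_mS_cm2": 0.3,       # illustrative value
    "leak_reversal_mV": -54.3,            # illustrative value
    "dt_ms": 0.025,
    "t_stop_ms": 100.0,
}

def provenance_tag(params):
    """Return a short hash identifying this exact parameter set."""
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()[:10]

def run(params):
    """Integrate a passive membrane equation with forward Euler.

    Kept as a separate, documented function so the numerical core can be
    tested and reused independently of any particular parameter file.
    """
    v = params["leak_reversal_mV"]
    g = params["leak_conductance_mS_cm2"]
    c = params["membrane_capacitance_uF_cm2"]
    e_leak, dt = params["leak_reversal_mV"], params["dt_ms"]
    trace = []
    for _ in range(int(params["t_stop_ms"] / dt)):
        i_leak = g * (v - e_leak)
        v += dt * (0.1 - i_leak) / c   # 0.1 uA/cm2 injected current, illustrative
        trace.append(v)
    return trace

if __name__ == "__main__":
    print("parameter set", provenance_tag(PARAMS))
    print("final voltage", round(run(PARAMS)[-1], 3), "mV")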
Results
Building on these standard practices, model sharing sites and tools
have been developed that fit into several categories: (1) standardized
neural simulators; (2) shared computational resources; (3) declarative
model descriptors, ontologies, and standardized annotations; and (4) model
sharing repositories and sharing standards.
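As a hypothetical sketch of the third category, the fragment below uses field names and annotation terms invented for illustration (it does not reproduce any published descriptor standard) to show how a model might be described declaratively, with annotations that a sharing repository could index and validate.

import json

# Illustrative declarative description: what the model is, not how to run it.
MODEL_DESCRIPTION = json.dumps({
    "name": "example_passive_cell",          # hypothetical model name
    "type": "point_neuron",
    "parameters": {"g_leak": 0.3, "e_leak": -54.3, "c_m": 1.0},
    "annotations": {
        "species": "generic",
        "brain_region": "unspecified",
        "curation_status": "unreviewed",     # metadata a repository might index
    },
})

REQUIRED_ANNOTATIONS = {"species", "brain_region", "curation_status"}

def validate(description_json):
    """Check that a declarative description carries the expected annotations."""
    desc = json.loads(description_json)
    missing = REQUIRED_ANNOTATIONS - set(desc.get("annotations", {}))
    if missing:
        raise ValueError(f"missing annotations: {sorted(missing)}")
    return desc

if __name__ == "__main__":
    model = validate(MODEL_DESCRIPTION)
    print("validated declarative description for", model["name"])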
Conclusion
A number of complementary innovations have been proposed to enhance
sharing, transparency, and reproducibility. Individual users can be
encouraged to adopt version control, commenting, documentation, and
modularity in the development of their models. The community can help by
requiring model sharing as a condition of publication and funding.
Significance
Model management will become increasingly important as multiscale
models become larger, more detailed, and correspondingly more difficult
for any single investigator or laboratory to manage. Additional
big data management complexity will arise as models become more useful
for interpreting experiments, increasing the need to ensure clear
alignment between modeling data, both parameters and results, and
experimental data.