2018
DOI: 10.3389/fninf.2018.00081

Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data

Abstract: The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results vary from insufficient model implementations and data analysis methods,…

Cited by 17 publications (47 citation statements). References 24 publications.
“…We first adapt the neuron model to separate the numerical instability issue from the locking of spikes to a 1 ms grid, by introducing integration substeps; see also Trensch et al. (in press) for an in-depth analysis of increasing the accuracy of integration of the Izhikevich model by this method. Thus the original configuration is simulated with 1 ms resolution and one integration substep: (1.0, 1).…”
Section: Results
confidence: 99%
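For illustration, here is a minimal sketch of the substep scheme the quote describes, assuming standard regular-spiking Izhikevich parameters and plain explicit Euler; it is not the cited study's actual code:

```python
def izhikevich_step(v, u, I, h=1.0, n_substeps=1,
                    a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance the Izhikevich model by one h-ms grid step using
    n_substeps explicit-Euler substeps of size h / n_substeps.
    The configuration (1.0, 1) in the quote corresponds to h=1.0,
    n_substeps=1; increasing n_substeps refines the integration
    while spike output stays reported on the h-ms grid."""
    dt = h / n_substeps
    fired = False
    for _ in range(n_substeps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * (a * (b * v - u))
        if v >= 30.0:          # threshold crossing: reset and record spike
            v, u = c, u + d
            fired = True
    return v, u, fired
```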
“…In Section 2.3 we describe in detail the particular scenario of model-to-model validation, which is the basis of a concrete worked example used for illustration during the remainder of the manuscript. In that example, we quantify the statistical difference between two implementations of the same model, namely the polychronization model (Izhikevich, 2006) and its reproduction on the SpiNNaker neuromorphic hardware system (cf. the companion study, Trensch et al., 2018). The models, the test statistics, and the formal workflow used for this validation are described in Section 3.…”
Section: Introduction
confidence: 95%
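As a sketch of how such a statistical difference between two implementations might be quantified (hypothetical feature choice and placeholder data, not the companion study's actual workflow):

```python
import numpy as np
from scipy import stats

def compare_firing_rates(rates_ref, rates_test, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on the per-neuron
    firing-rate distributions of two implementations of the same
    model.  Returns the KS statistic, the p-value, and whether the
    distributions are statistically indistinguishable at level alpha."""
    d, p = stats.ks_2samp(rates_ref, rates_test)
    return d, p, p > alpha

# Hypothetical usage: rates (Hz) from a reference simulation and a
# SpiNNaker port of the same network (placeholder random data).
rng = np.random.default_rng(42)
rates_ref = rng.gamma(shape=2.0, scale=3.0, size=1000)
rates_test = rng.gamma(shape=2.0, scale=3.1, size=1000)
print(compare_firing_rates(rates_ref, rates_test))
```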
“…We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.…”
confidence: 99%
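A minimal sketch of how such a validation test could be organized in the two-step pattern the quote describes (extract a feature, then score agreement); the class and method names are assumptions, not the introduced library's actual API:

```python
import numpy as np
from scipy import stats

class CorrelationTest:
    """Illustrative validation test: reduce each model's activity
    data to a feature, then score the agreement of the two feature
    distributions with a two-sample test."""

    def extract_feature(self, binned_counts):
        # binned_counts: (neurons x time bins) array of spike counts.
        # Feature: distribution of pairwise Pearson correlations.
        cc = np.corrcoef(binned_counts)
        return cc[np.triu_indices_from(cc, k=1)]

    def judge(self, counts_a, counts_b):
        return stats.ks_2samp(self.extract_feature(counts_a),
                              self.extract_feature(counts_b))
```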
“…In this paper we addressed the numerical accuracy of ODE solvers, solving a well-known neuron model in fixed- and floating-point arithmetic. First, we identified that the constants in the Izhikevich neuron model should be specified explicitly by using the nearest representable number, as the GCC fixed-point implementation rounds down in decimal-to-fixed-point conversion by default (this was also independently noticed by another study [23], but the authors there chose to increase the precision of the numerical format of the constants instead of rounding the constants to the nearest representable value as we did in this work). Next, we put all constants smaller than 1 into unsigned long fract types and developed mixed-format multiplications, instead of keeping everything in accum, to maximize the accuracy.…”
Section: Discussion, Further Work and Conclusion
confidence: 93%
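A small sketch of the rounding issue the quote raises, assuming the s16.15 format and the standard Izhikevich constants; it contrasts truncating conversion with round-to-nearest:

```python
FRAC_BITS = 15  # s16.15: 16 integer bits (incl. sign), 15 fractional bits

def to_fixed_trunc(x, frac_bits=FRAC_BITS):
    """Round-down (truncating) decimal-to-fixed-point conversion,
    the default behavior the quote attributes to GCC."""
    return int(x * (1 << frac_bits))

def to_fixed_nearest(x, frac_bits=FRAC_BITS):
    """Conversion to the nearest representable fixed-point value."""
    return round(x * (1 << frac_bits))

for c in (0.04, 0.2, 0.02):  # Izhikevich constants below 1
    t, n = to_fixed_trunc(c), to_fixed_nearest(c)
    print(c, t / (1 << FRAC_BITS), n / (1 << FRAC_BITS))
    # e.g. 0.04 truncates to 0.39978... but rounds to 0.040008...
```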
“…Following from that, we have taken another step in this direction and have represented all of the constants smaller than 1 as u0.32 instead of s16.15, which resulted in a maximum error of 2⁻³²/2. The earlier work [23] used the s8.23 format for these constants, but we think that there is no downside to going all the way to the u0.32 format if the constants are below 1, and any arithmetic operations involving these constants can output in a different format if more dynamic range or signed values are required. In order to support this, we have developed libraries for mixed-format multiplication operations, where s16.15 variables can be multiplied by u0.32 variables, returning an s16.15 result (as described in detail in Section 4).…”
Section: (B) About Constants
confidence: 99%
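A minimal sketch of the mixed-format multiplication described above, using pure integer arithmetic and illustrative helper names (not the cited library):

```python
def mul_s1615_u032(a_s1615: int, b_u032: int) -> int:
    """Multiply an s16.15 fixed-point value by a u0.32 constant and
    return an s16.15 result.  The raw product carries 15 + 32 = 47
    fractional bits, so shifting right by 32 leaves the 15 fractional
    bits of the output format.  Python's >> floors for negative ints,
    matching a hardware arithmetic shift right."""
    return (a_s1615 * b_u032) >> 32

# Hypothetical usage: multiply v = -65.0 (s16.15) by b = 0.2 (u0.32).
v = int(-65.0 * (1 << 15))               # s16.15 encoding of -65.0
b = round(0.2 * (1 << 32))               # u0.32 encoding of 0.2
print(mul_s1615_u032(v, b) / (1 << 15))  # ~= -13.0
```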