2020
DOI: 10.1101/2020.04.27.063693
Preprint

Larger GPU-accelerated brain simulations with procedural connectivity

Abstract: Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10^12 synaptic connections which, in simulations, are each typically characterized by at least one floating-point value. This amounts to several terabytes of data, an unrealistic memory requirement for a single desktop machine. Large models are therefore typically simulated on distrib…
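As a back-of-envelope check on the abstract's memory figure, assuming a single 32-bit weight per synapse (the "at least one floating-point value" mentioned above; real simulations often store more per synapse):

```python
# Rough memory estimate for storing mouse-scale synaptic connectivity.
NUM_SYNAPSES = 1e12     # order of magnitude for a mouse brain (per the abstract)
BYTES_PER_WEIGHT = 4    # one single-precision float per synapse (assumption)

total_bytes = NUM_SYNAPSES * BYTES_PER_WEIGHT
print(f"{total_bytes / 1e12:.1f} TB")  # prints 4.0 TB
```

This confirms "several terabytes" even under the most optimistic one-value-per-synapse assumption.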

Cited by 12 publications (28 citation statements)
References 34 publications
“…The multi-area model of monkey cortex developed by Schmidt et al [11,12] and described here has a somewhat higher threshold for reuse, due to its greater complexity and specificity. Nevertheless, it has already been ported to a single GPU using connectivity generated on the fly each time a spike is triggered, thereby trading memory storage and retrieval for computation, which is possible in this case because the synapses are static [52]. We hope that the technologies presented here push the complexity barrier of neuroscience modeling a bit further out, such that the model will find a wide uptake and serve as a scaffold for generating an ever more complete and realistic picture of cortical structure, dynamics, and function.…”
Section: Discussion
confidence: 99%
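The citation above describes the core idea of procedural connectivity: because the synapses are static, the targets of a spiking neuron can be regenerated deterministically from a per-neuron seed instead of being stored, trading memory for computation. A hypothetical minimal sketch (neuron counts, fan-out, and function names are illustrative, not from the paper):

```python
import numpy as np

# Procedural connectivity sketch: rather than storing every postsynaptic
# target list (O(synapses) memory), re-derive it from a deterministic
# per-neuron RNG seed each time that neuron spikes (O(neurons) memory).
NUM_NEURONS = 10_000
FAN_OUT = 100  # fixed number of targets per presynaptic neuron (assumption)

def targets_of(pre_id: int) -> np.ndarray:
    """Regenerate the (static) target list of neuron `pre_id` on the fly."""
    rng = np.random.default_rng(seed=pre_id)  # seed derived from neuron id
    return rng.choice(NUM_NEURONS, size=FAN_OUT, replace=False)

# Because the RNG is reseeded identically on every call, the connectivity
# is effectively static, as the citation requires: a given neuron's spike
# always reaches the same targets.
assert np.array_equal(targets_of(42), targets_of(42))
```

On a GPU this regeneration is cheap enough to amortize across threads, which is why the trade of storage for computation pays off for spike-triggered access patterns.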
“…Because the models are very close dynamically and structurally, it is a good way to evaluate roughly the memory and computational performances of corresponding simulators and hardware solutions. Indeed, simulations in [26, 46, 24] use LIF models that are dynamically very close to the Hawkes model used here (see Section 2.1). Structurally, we compare networks of similar size and connectivity.…”
Section: Discussion
confidence: 99%
“…Table 1) and both computational and memory complexities of their solutions. However, in [26, 46, 24] we were not able to find the mean network firing rate, and no computational complexity is reported except in [24], where only the memory complexity is provided. Still, these simulations are meant to be biologically plausible and therefore should have been run on realistic parameter ranges.…”
Section: Discussion
confidence: 99%
“…Originally, the idea of event-driven connectivity generation has been proposed in the case of abstract neurons for which spike timing is exactly known, i.e., rule-based artificial cell units, or finite state machines (Lytton and Stewart, 2006 ). This approach has then been applied with integrate-and-fire (IF) neurons, i.e., quadratic IF (Izhikevich and Edelman, 2008 ) and leaky IF over GPU hardware (Knight and Nowotny, 2020 ).…”
Section: Related Work
confidence: 99%