2019
DOI: 10.1145/3371235
Exploring Complex Brain-Simulation Workloads on Multi-GPU Deployments

Abstract: In-silico brain simulations are the de facto tools computational neuroscientists use to understand large-scale and complex brain-function dynamics. Current brain simulators do not scale efficiently to large-scale problem sizes (e.g., >100,000 neurons) when simulating biophysically complex neuron models. The goal of this work is to explore the use of true multi-GPU acceleration through NVIDIA's GPUDirect technology on computationally challenging brain models and to assess their scalability. The brain mode…
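The "true multi-GPU acceleration through NVIDIA's GPUDirect technology" mentioned in the abstract refers to GPUs exchanging device buffers directly rather than staging every transfer through host memory. As a minimal sketch (not code from the paper; device IDs and buffer size are illustrative assumptions), the CUDA runtime's peer-to-peer API shows the basic mechanism that GPUDirect P2P builds on:

    // Minimal sketch, not the paper's implementation: enable peer access
    // between two GPUs and copy a device buffer directly, bypassing the host.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, /*device=*/0, /*peerDevice=*/1);
        if (!canAccess) { std::printf("No P2P path between GPU 0 and GPU 1\n"); return 1; }

        const size_t n = 1 << 20;          // illustrative buffer size
        float *buf0 = nullptr, *buf1 = nullptr;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 map GPU 1's memory
        cudaMalloc(&buf0, n * sizeof(float));

        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        cudaMalloc(&buf1, n * sizeof(float));

        // Device-to-device copy over NVLink/PCIe without a host staging buffer.
        cudaMemcpyPeer(buf1, 1, buf0, 0, n * sizeof(float));
        cudaDeviceSynchronize();
        return 0;
    }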

Cited by 15 publications (6 citation statements)
References 30 publications
“…Notably, ExaFlexHH maintains a consistent performance trend, particularly when the DFEs are fully utilized with data. In contrast, the work in Vlag et al. (2019) reported a variable speedup, dropping to as low as 8× under similar simulations. Furthermore, Chatzikonstantis et al. (2019) even reported a decrease in performance when scaling out in their experiments with uniform connectivity distributions and a connectivity density of 1,000 synapses per neuron, which is much lower than that supported by ExaFlexHH.…”
Section: HHMCG (mentioning)
confidence: 81%
“…This is not typical of multi-node setups. For illustrative purposes, we compared the performance scalability of our implementation against two other works simulating an IO network: a multi-GPU setup supporting GPUDirect, as detailed in Vlag et al. (2019), and a multi-node many-core CPU architecture, as described in Chatzikonstantis et al. (2019). The results presented in Table 4 demonstrate superior performance scalability for ExaFlexHH.…”
Section: HHMCG (mentioning)
confidence: 99%
“…By using this scheme, information is transmitted efficiently in large clusters: no information has to be exchanged between nodes that do not communicate with each other. This is a scalability improvement over existing methods, where the full matrix of connectivity degrees among nodes is gathered on all nodes (Vlag et al., 2019; Magalhães and Schürmann, 2020).…”
Section: Methods (mentioning)
confidence: 99%
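The scalability point quoted above, that exchanging data only between connected nodes beats gathering the full matrix of connectivity degrees on every node, can be illustrated with a small, hypothetical MPI sketch (not code from any of the cited works; the ring neighborhood and the degree value are placeholder assumptions standing in for an arbitrary sparse connectivity pattern):

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int myDegree = rank + 1;  // placeholder for a locally computed connectivity degree

        // Pattern criticized above: every rank collects every rank's degree,
        // even for peers it never exchanges spikes with (O(size) state per rank).
        std::vector<int> allDegrees(size);
        MPI_Allgather(&myDegree, 1, MPI_INT,
                      allDegrees.data(), 1, MPI_INT, MPI_COMM_WORLD);

        // Scalable alternative: exchange only with actual neighbors (a ring here,
        // purely illustrative), so traffic scales with local fan-out, not cluster size.
        int next = (rank + 1) % size;
        int prev = (rank + size - 1) % size;
        int neighborDegree = 0;
        MPI_Sendrecv(&myDegree, 1, MPI_INT, next, /*tag=*/0,
                     &neighborDegree, 1, MPI_INT, prev, /*tag=*/0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }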
“…A number of GPU-based simulators are currently in use for SNN simulation, such as ANNarchy (Vitay et al., 2015), CARLsim (Niedermeier et al., 2022), BINDSnet (Hazan et al., 2018), GeNN, and NEST GPU (Golosio et al., 2021). These use different design principles (Brette and Goodman, 2012; Vlag et al., 2019) that are optimal for specific use cases. NEST GPU, for example, follows the design principle of NEST, allowing for the distributed simulation of very large networks on multiple GPUs (on multiple machines) using MPI.…”
Section: Limitations of the Present Study (mentioning)
confidence: 99%