2022 · DOI: 10.3389/fninf.2022.883700

Brian2CUDA: Flexible and Efficient Simulation of Spiking Neural Network Models on GPUs

Abstract: Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python…
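
The abstract's point about generating GPU code from high-level Python model definitions can be illustrated with a minimal sketch. This is not taken from the paper; it assumes brian2 and brian2cuda are installed and a CUDA-capable GPU is available, and the model parameters are purely illustrative:

```python
# Minimal sketch (hypothetical parameters): a small LIF population simulated with Brian2CUDA.
from brian2 import *
import brian2cuda                      # registers the "cuda_standalone" device (assumes the package is installed)

set_device("cuda_standalone")          # generate and compile CUDA code instead of the default CPU code

N = 10000
tau = 10*ms
eqs = "dv/dt = (1 - v) / tau : 1 (unless refractory)"

group = NeuronGroup(N, eqs, threshold="v > 0.8", reset="v = 0",
                    refractory=5*ms, method="exact")
group.v = "rand()"                     # random initial membrane potentials

spikes = SpikeMonitor(group)
run(1*second)                          # code generation, compilation, and simulation happen here
print(spikes.num_spikes)
```

Removing the import and the set_device call falls back to Brian's default CPU backend; the model definition itself does not change, which is the flexibility the abstract refers to.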

Cited by 10 publications (14 citation statements) · References 41 publications

“…Notably, the proposed approach requires a relatively small number of free parameters, resulting in straightforward model development and calibration. Another advantage of our implementation is its compatibility with all popular operating systems running on CPUs and GPUs 72,73 . Finally, our approach allows testing new algorithms compatible with neuromorphic hardware [88][89][90] , which has seen impressive resource-saving benefits by including dendrites 91 .…”
Section: Discussion (mentioning)
confidence: 99%
“…6). It is important to note that since simulation performance depends on multiple factors such as model complexity, hardware specifications, and case-specific optimizations (e.g., C++ code generation 42 or GPU acceleration 72,73 ), designing a single most representative test is unrealistic. For the sake of simplicity and to replicate a real-world usage scenario, all simulations presented in this section were performed on an average laptop using standard and widely used Python tools (Supplementary Table 4).…”
Section: Scalability Analysis (mentioning)
confidence: 99%
“…Advances in GPU computing and strong interest in neuromorphic computing have led to various efficient implementations of spiking neural networks. Recent work that implements simulations of spiking neural networks in GPUs include the following: a code-generation based system that generates CUDA code for GPU (GeNN [28]) and a popular Python-based simulator for spiking neural networks, Brian2, extended for generating CUDA code directly (Brian2CUDA [29]) or through GeNN (Brian2GeNN [30]). Similarly, highly efficient CPU-based simulations of spiking neural networks can be implemented in NEST (NEural Simulation Tool [31]).…”
Section: Discussion (mentioning)
confidence: 99%
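
The citation above contrasts several code-generation backends that can run the same high-level Brian 2 model. As a hedged sketch of how a script might select among them (the switch variable below is purely illustrative and not part of any of the cited tools), only the device changes while the model definition stays the same:

```python
# Illustrative sketch: choosing a GPU or CPU code-generation backend for one Brian 2 model.
from brian2 import *

backend = "cuda"                       # hypothetical switch for illustration: "cuda", "genn", or "cpp"

if backend == "cuda":
    import brian2cuda                  # Brian2CUDA generates CUDA code directly
    set_device("cuda_standalone")
elif backend == "genn":
    import brian2genn                  # Brian2GeNN generates CUDA code through the GeNN library
    set_device("genn")
else:
    set_device("cpp_standalone")       # Brian's built-in C++ code generation for CPUs

# The high-level model definition is identical for every backend.
G = NeuronGroup(1000, "dv/dt = -v / (10*ms) : 1",
                threshold="v > 0.5", reset="v = 0", method="exact")
G.v = "rand()"
run(100*ms)
```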