Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
2020
DOI: 10.1007/978-3-030-61616-8_49

Cited by 5 publications (6 citation statements)
References 25 publications
“…As the work of Ostrau et al. (2020) shows some similarities to our work, it is noteworthy that they pre-train several conventional ANNs which are then converted into SNNs. It is shown that the SpiNNaker system can be efficient if it is used to its full extent, whereas the GeNN implementations are most suitable if one focuses primarily on short simulation times.…”
Section: Introduction (supporting)
confidence: 66%
“…However, as noted in Diamond et al. (2016), this implies the risk that individual strengths of the simulators are not accounted for. Despite this issue, in this paper, all SNNs are modeled with PyNN, whereas Ostrau et al. (2020) opted to use the Cypress library and Diamond et al. (2016) decided to model their networks in the native modeling languages of the simulators instead of using their PyNN interface. However, using PyNN to model all networks makes it possible to simulate exactly the same model on all backends.…”
Section: Methods (mentioning)
confidence: 99%
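To illustrate the backend-agnostic modeling this statement refers to, here is a minimal PyNN sketch in which only the imported simulator module decides where the network runs. The backend, neuron model, and all parameter values are illustrative assumptions, not taken from the cited papers.

# The same PyNN network description can be simulated on different backends
# simply by changing which simulator module is imported.
import pyNN.nest as sim          # e.g. swap for pyNN.spiNNaker or pyNN.brian2

sim.setup(timestep=0.1)          # simulation time step in ms

# Two populations of leaky integrate-and-fire neurons (conductance-based synapses)
pre = sim.Population(100, sim.IF_cond_exp())
post = sim.Population(10, sim.IF_cond_exp())

# Dense (all-to-all) static connectivity
sim.Projection(pre, post,
               sim.AllToAllConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))

post.record("spikes")
sim.run(1000.0)                  # simulate 1 s of model time

spikes = post.get_data().segments[0].spiketrains
sim.end()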
“…A third benchmark is the conversion of deep neural networks to SNNs (Ostrau et al., 2020a). For this conversion, DNNs are trained using ReLU activation functions, without biases, and, for simplicity, with only densely connected layers.…”
Section: Methods (mentioning)
confidence: 99%
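As a rough illustration of the conversion setup described in this statement, the following sketch trains a small DNN with ReLU activations, no bias terms, and only densely connected layers. The layer sizes, the MNIST dataset, and the training settings are illustrative assumptions rather than the exact configuration used in the cited work.

# Train a bias-free, dense-only ReLU network whose weights could later be
# mapped layer by layer onto an equivalent spiking network.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(300, activation="relu", use_bias=False, input_shape=(784,)),
    tf.keras.layers.Dense(100, activation="relu", use_bias=False),
    tf.keras.layers.Dense(10, activation="softmax", use_bias=False),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)

# model.get_weights() then yields the weight matrices for the conversion step.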