The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA 2021)
DOI: 10.1145/3431920.3439283
S2N2: A FPGA Accelerator for Streaming Spiking Neural Networks

Cited by 41 publications (20 citation statements)
References 35 publications
“…The PEs are highly pipelined to achieve a high clock frequency and to improve parallelism. In contrast to HLS-based approaches like the one of Fang et al. [8] or S2N2 [39], our approach is agnostic to the CSNN's architecture and can thus be implemented on an ASIC as well. As the hardware utilization of a single convolution unit is so small, multiple convolution units can be implemented in parallel, allowing easy scaling of throughput.…”
Section: Discussion
confidence: 99%
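The design point quoted above — small, pipelined convolution units replicated to scale throughput — can be illustrated at a purely behavioral level. The following Python sketch is an illustration under assumed names and shapes (none of them come from the cited papers): each "unit" computes one output channel of a spiking convolution, and instantiating N units side by side multiplies throughput while per-unit latency stays constant.

```python
import numpy as np

# Behavioral sketch (not from [8] or [39]): several identical convolution
# units process different output channels of a binary spike frame. In
# hardware the units run concurrently, so adding units scales throughput.

def conv_unit(spikes, kernel):
    """One convolution unit: valid 2D convolution over a binary spike map."""
    kh, kw = kernel.shape
    h, w = spikes.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(spikes[i:i + kh, j:j + kw] * kernel)
    return out

def parallel_conv(spikes, kernels):
    """N independent units, one per output channel; the loop stands in
    for spatial replication of the same small hardware block."""
    return np.stack([conv_unit(spikes, k) for k in kernels])

spikes = (np.random.rand(8, 8) > 0.7).astype(np.float64)  # binary spike frame
kernels = np.random.randn(4, 3, 3)                        # 4 parallel units
out = parallel_conv(spikes, kernels)
print(out.shape)  # (4, 6, 6)
```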
“…In the future we plan to implement larger SNNs and also compare our results to non-spiking implementations.…”
[Table excerpt from the citing paper, only partially recoverable: an unnamed ASIC design (0.001, 98.0), SIES [18] (FPGA, 99.2), and S2N2 [39] (FPGA, 98.5); the final column appears to be accuracy in %.]
Section: Discussion
confidence: 99%
“…However, their hardware design occupies many resources, making it difficult to deploy on devices with small footprints. S2N2 [34], on the other hand, is a SIMD architecture with high resource efficiency. The architecture was also evaluated on a two-dimensional image dataset.…”
Section: Related Work
confidence: 99%
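To make the quoted "SIMD architecture" remark concrete, here is a hedged behavioral sketch — not the actual S2N2 datapath, and all parameters are illustrative assumptions: a single vectorized update rule applied across a bank of leaky integrate-and-fire (LIF) neurons, the kind of one-operation-many-neurons step over which a SIMD accelerator amortizes its hardware.

```python
import numpy as np

# Illustrative vectorized LIF update (parameters are assumptions, not
# values from S2N2 [34]): every array operation touches all neuron
# "lanes" at once, mirroring a SIMD datapath.

def lif_step(v, i_in, leak=0.9, v_th=1.0):
    """One timestep: leak, integrate, threshold, reset — all lanes in parallel."""
    v = leak * v + i_in            # one multiply-accumulate per lane
    spikes = v >= v_th             # compare all lanes at once
    v = np.where(spikes, 0.0, v)   # reset only the lanes that fired
    return v, spikes.astype(np.uint8)

v = np.zeros(16)                   # 16 neuron lanes
for _ in range(5):
    v, s = lif_step(v, np.random.rand(16))
print(s)
```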
“…Various accelerators have recently been proposed, which can be divided into two approaches based on their supported topologies [1]: general mesh and feedforward. For general mesh topology, large-scale neuromorphic hardware systems such as TrueNorth [2], Loihi [3], SpiNNaker [4], BrainScaleS [5], ODIN [6], µBrain [7] and DYNAPs [8] support a mesh of neurons with no particular topology via advanced routers and schedulers.…”
Section: Introduction
confidence: 99%
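The general-mesh versus feedforward distinction drawn in the quoted introduction can be sketched in software terms. The following is an illustrative assumption, not how TrueNorth, Loihi, or SpiNNaker actually implement routing: a mesh-style system delivers each spike as an addressed event through a routing table, which is what lets it support arbitrary topologies, whereas a feedforward accelerator like S2N2 simply streams spikes from one layer to the next.

```python
from collections import defaultdict

# Toy mesh-style event delivery (an assumption for illustration, not the
# routing scheme of [2]-[8]): each (source, timestep) spike event is
# forwarded to every destination listed in an arbitrary routing table.

def route_events(events, fanout):
    """Deliver spike events according to a per-neuron fan-out table."""
    inbox = defaultdict(list)
    for src, t in events:
        for dst in fanout.get(src, []):
            inbox[dst].append(t)
    return inbox

fanout = {0: [2, 3], 1: [3]}           # arbitrary-topology routing table
events = [(0, 1), (1, 1), (0, 2)]      # (source neuron, timestep)
print(dict(route_events(events, fanout)))  # {2: [1, 2], 3: [1, 1, 2]}
```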