2007
DOI: 10.1016/j.neucom.2006.11.029
Challenges for large-scale implementations of spiking neural networks on FPGAs

Cited by 164 publications (98 citation statements)
References 37 publications
“…Generated spike packets are processed and forwarded in a single clock cycle. Full rotation of the spike packet on the ring ensures broadcast packet flow control. [Figure labels omitted: rotation and phase counter with paired registers RR[0]–RR[7] and monitors MNT[0]–MNT[7].]…”
Section: Fixed Latency Spike Flow-control
confidence: 99%
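The ring-based flow control described in this excerpt can be illustrated with a minimal Python sketch, assuming a unidirectional ring in which each node consumes and forwards the spike packet one hop per clock cycle and the packet is retired once it has completed a full rotation back to its source. The RingNode class, packet fields, and eight-node ring size are illustrative assumptions, not the cited hardware design.

# Minimal sketch of ring-based broadcast flow control for spike packets.
# Hypothetical model, not the cited RTL: one hop per "clock cycle",
# packet retired after a full rotation past all nodes.

class RingNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.received = []          # spike sources delivered to this node

    def process(self, packet):
        """Consume the spike locally and forward it in the same cycle."""
        self.received.append(packet["src"])
        return packet               # forwarded unchanged to the next node


def broadcast_spike(nodes, src_id):
    """Inject a spike packet at src_id and rotate it once around the ring."""
    packet = {"src": src_id, "hops": 0}
    n = len(nodes)
    idx = src_id
    for _ in range(n):              # full rotation = n hops = n clock cycles
        idx = (idx + 1) % n
        packet = nodes[idx].process(packet)
        packet["hops"] += 1
    # The packet has rotated back to its source: every node has seen it,
    # so the source can safely retire it (flow control by full rotation).
    return packet["hops"]


if __name__ == "__main__":
    ring = [RingNode(i) for i in range(8)]   # 8 nodes, mirroring RR[0..7]
    cycles = broadcast_spike(ring, src_id=3)
    print(f"broadcast took {cycles} hops; all nodes saw the spike:",
          all(3 in node.received for node in ring))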
“…The efficient implementation of hardware SNN architectures for real-time embedded systems is primarily influenced by neuron design, scalable on-chip interconnect architecture and SNN training/learning algorithms [7]. Packet switched Network on Chip (NoC) architectures have recently been proposed as the spike communication infrastructure for hardware SNNs, where data packets containing spike information are routed over a network of routers.…”
Section: Introduction
confidence: 99%
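As a rough illustration of the packet-switched NoC approach this excerpt refers to, the sketch below routes an address-event style spike packet across a 2D mesh of routers using dimension-ordered (XY) routing. The xy_route function, packet fields, and mesh coordinates are assumptions made for illustration, not the cited router architecture.

# Hypothetical sketch: XY (dimension-ordered) routing of a spike packet
# across a 2D mesh NoC. Illustrative only; not the cited router design.

def xy_route(src, dst):
    """Return the router coordinates a spike packet traverses from
    src=(x, y) to dst=(x, y), resolving the X dimension before Y."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:              # move along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:              # then move along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path


if __name__ == "__main__":
    # Spike packet carrying only the source neuron address (AER style).
    spike = {"src_neuron": 42, "dst_router": (3, 1)}
    hops = xy_route(src=(0, 0), dst=spike["dst_router"])
    print("router path:", hops)     # each hop costs one or a few cycles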
“…Maass has demonstrated that spiking neurons are more computationally powerful than threshold-based neuron models [24] and that SNNs possess similar and often more computation ability compared to second generation multi-layer perceptrons [25]. Other works [26][27][28] have investigated SNN hardware implementations and have found that computation in the temporal domain can be performed more efficiently in hardware compared to employing complex non-linear sigmoidal neural models. These findings, and an increasing interest in efficient temporal computation have encouraged interest in SNNs and their application to classification and control tasks.…”
Section: Spiking Neural Network
confidence: 99%
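One way to see why temporal, spike-based computation maps cheaply to hardware, as this excerpt argues, is a leaky integrate-and-fire update built only from shifts, adds, and a comparison, with no sigmoid evaluation. This is a minimal sketch under assumed parameters; LEAK_SHIFT, THRESHOLD, and the constant input drive are arbitrary values chosen for illustration.

# Illustrative leaky integrate-and-fire (LIF) update using only integer
# shifts, adds, and a threshold compare -- operations that are cheap in
# FPGA logic. Parameters are arbitrary assumptions for the sketch.

LEAK_SHIFT = 4          # leak = v >> 4, i.e. decay by ~6.25% per step
THRESHOLD = 1 << 10     # firing threshold in fixed-point units
RESET = 0

def lif_step(v, weighted_input):
    """One time step: leak, integrate the summed synaptic input, fire."""
    v = v - (v >> LEAK_SHIFT) + weighted_input
    if v >= THRESHOLD:
        return RESET, 1             # spike emitted, membrane reset
    return v, 0                     # no spike this step


if __name__ == "__main__":
    v, spikes = 0, []
    for t in range(40):
        v, s = lif_step(v, weighted_input=80)   # constant input drive
        spikes.append(s)
    print("spike train:", spikes)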
“…In reconfigurable architectures, the model can modify the hardware configuration of the chip while the simulation is running. Either through component swapping [12] or network remapping [56], these approaches seek to circumvent scalability limitations, with some success, but with both FPGA's and GPU's scalability has proven to be the main problem, with FPGA's running into routing barriers due to their circuit-switched fabric [33] and GPU's running into memory access barriers. Even more problematic has been power consumption: a typical large FPGA may dissipate ∼ 50W and a GPU accelerator ∼ 200W.…”
Section: Adapted General-purpose Hardware
confidence: 99%