2009 International Conference on Field-Programmable Technology
DOI: 10.1109/fpt.2009.5377667

A parallel spiking neural network simulator

Abstract: An FPGA-based systolic architecture for the high-speed simulation of spiking neural networks is presented. The design implements Izhikevich's neuron model and employs optimizations for the typical case where neuron activity is low. Since the execution time required is related to the activity level, the performance of the design can be improved by an order of magnitude.
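For reference, Izhikevich's model updates two state variables per neuron, the membrane potential v and the recovery variable u. A minimal forward-Euler sketch in Python (the parameter values are the standard regular-spiking defaults from Izhikevich's 2003 paper, not necessarily those used in the FPGA design):

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One forward-Euler step of Izhikevich's neuron model.

    v: membrane potential (mV), u: recovery variable, I: input current.
    a, b, c, d: the standard regular-spiking parameters.
    Returns the updated (v, u) and whether the neuron fired this step.
    """
    fired = v >= 30.0          # spike threshold (mV)
    if fired:                  # after-spike reset
        v, u = c, u + d
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    return v, u, fired
```

Only a handful of multiply-adds are evaluated per neuron per time step, which is part of why the arithmetic maps well onto FPGA DSP blocks.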


Cited by 19 publications (10 citation statements); references 11 publications.
“…It also implies an approximation of the spike timestamp as large as the pipeline depth. We suppose that such an approximation does not interfere with their applications [1,4,6,10,11], but the same cannot be said of all applications.…”
Section: Results
confidence: 99%
“…This optimization is based on the fact that new neuron states need not be calculated at every single time step, but only when a new input arrives at the neuron, as otherwise the neuron state would not change. To improve on their initial memory-bandwidth requirements [4], Cheung et al. replaced the FPGA board with a Maxeler Dataflow Machine [5]. This FPGA-based device features state-of-the-art memory systems that greatly increase bandwidth compared to simple FPGA boards.…”
Section: Related Work
confidence: 98%
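The event-driven principle described above can be illustrated with a small sketch. This is not the pipeline from [4,5] (which uses Izhikevich neurons); it uses a leaky integrate-and-fire neuron purely because its idle-interval decay has a closed form, so a neuron's state is only touched when a spike actually reaches it. The function name, parameter values, and event format are all illustrative assumptions:

```python
import heapq
import math

def run_event_driven(weights, input_spikes, t_end,
                     tau=20.0, v_th=1.0, delay=1.0):
    """Event-driven LIF simulation: neuron state is updated only when a
    spike arrives; the idle interval is bridged with closed-form decay.

    weights[j][k]: synaptic weight from neuron j to neuron k.
    input_spikes:  list of (time, target neuron) external events.
    Returns a list of (time, neuron) output spikes.
    """
    n = len(weights)
    v = [0.0] * n          # membrane potential at each neuron's last update
    last_t = [0.0] * n     # time of each neuron's last update
    # Event queue entries: (arrival time, target neuron, weight).
    events = [(t, j, 1.0) for (t, j) in input_spikes]
    heapq.heapify(events)
    out = []
    while events:
        t, j, w = heapq.heappop(events)
        if t > t_end:
            break
        # Advance neuron j across the idle interval in one step.
        v[j] *= math.exp(-(t - last_t[j]) / tau)
        last_t[j] = t
        v[j] += w
        if v[j] >= v_th:
            v[j] = 0.0                      # reset after spike
            out.append((t, j))
            for k, wk in enumerate(weights[j]):
                if wk != 0.0:               # fan out with axonal delay
                    heapq.heappush(events, (t + delay, k, wk))
    return out
```

With low activity, the work done is proportional to the number of spike events rather than to neurons × time steps, which is the source of the speedup the citing text describes.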
“…Two of the most notable implementations using Izhikevich neurons are the designs proposed by Cheung et al. [5,4] and by Moore et al. (Bluehive [15]). Each approach proposed an event-driven FPGA architecture for very large-scale SNNs, optimizing the network traffic and the associated memory-bandwidth needs.…”
Section: Related Work
confidence: 98%
“…[K. Cheung et al. 2009] and [M. Ambroise et al. 2013] chose to design their networks using a register-transfer-level (RTL) language such as VHDL. Although this has the benefit of fine control over the implementation details, it significantly reduces overall design productivity compared with a higher-level abstraction flow.…”
Section: Related Work
confidence: 99%
“…Storage and memory resource optimization are essential considerations, and often the aim is to maximize their utilization. That is the case in [K. Cheung et al. 2009], which uses 18-bit and 9-bit two's-complement precision to fully utilize the 18-bit two's-complement multipliers of the DSP blocks available in the FPGA device. Another example is driven by memory availability: [D. Thomas et al. 2009] stores four 9-bit synaptic weights in one 36-bit register in internal BRAM to maximize BRAM utilization, using C as the high-level design language.…”
Section: Related Work
confidence: 99%
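The 36-bit weight packing mentioned above can be sketched in Python. The field layout (weight i occupying bits 9i..9i+8) is an assumption; the citing text only states that four 9-bit two's-complement weights share one 36-bit BRAM word:

```python
def pack_weights(ws):
    """Pack four 9-bit two's-complement weights (-256..255) into a 36-bit word."""
    assert len(ws) == 4 and all(-256 <= w <= 255 for w in ws)
    word = 0
    for i, w in enumerate(ws):
        word |= (w & 0x1FF) << (9 * i)   # keep the low 9 bits of each weight
    return word

def unpack_weights(word):
    """Recover the four signed 9-bit fields from a packed 36-bit word."""
    ws = []
    for i in range(4):
        field = (word >> (9 * i)) & 0x1FF
        ws.append(field - 512 if field & 0x100 else field)  # sign-extend
    return ws
```

On the FPGA, the same trick means one BRAM port read delivers four synaptic weights at once, quadrupling the effective weight bandwidth per port.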