2006 International Conference on Field Programmable Logic and Applications
DOI: 10.1109/fpl.2006.311352
Area Efficient Architecture for Large Scale Implementation of Biologically Plausible Spiking Neural Networks on Reconfigurable Hardware

Cited by 16 publications (14 citation statements)
References 3 publications
“…For an area-efficient implementation of a neural reservoir on reconfigurable hardware, it is necessary that the use of area-hungry operators such as multipliers is minimised or avoided entirely [13,14]. In traditional modelling of synapses, inputs are multiplied by fixed weights, so the number of multipliers grows with the number of synapses.…”
Section: Reconfigurable Architecture For Reservoir Implementation
confidence: 99%
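To make the scaling problem concrete: the usual way to eliminate per-synapse multipliers in FPGA designs is to constrain weights so the multiply degenerates into a cheaper operation, such as a bit-shift for power-of-two weights. The sketch below is purely illustrative — it is not the architecture from the cited paper, and the function names are invented for this example.

```python
# Illustrative sketch (assumption, not the paper's design): a "multiplier-less"
# synapse restricts each weight to a signed power of two, so w * x reduces to
# a bit-shift plus a sign -- the kind of substitution FPGA implementations use
# to avoid area-hungry hardware multipliers.

def synapse_shift(x: int, shift: int, sign: int = 1) -> int:
    """Compute sign * x * 2**shift using a shift instead of a multiplier."""
    return sign * (x << shift)

def neuron_input(spikes, shifts, signs):
    """Accumulate shift-based synaptic contributions for one neuron."""
    total = 0
    for x, sh, sg in zip(spikes, shifts, signs):
        total += synapse_shift(x, sh, sg)
    return total

# Three active inputs with effective weights +4 (2**2), -2 (2**1), +1 (2**0):
print(neuron_input([1, 1, 1], [2, 1, 0], [1, -1, 1]))  # prints 3
```

Because each synapse now needs only a shifter and an adder, the resource cost per synapse stays small and constant, which is what makes large-scale implementations feasible on a single device.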
“…It exploits previously published techniques, namely an area-efficient multiplier-less architecture, which removes the burden of the multipliers required for synaptic multiplication [13,14]. To investigate the viability of implementing the RC paradigm on a HW/SW platform, this work presents area-efficient spiking-neuron architectures.…”
Section: Introduction
confidence: 98%
“…Several authors have examined the acceleration of neural network processing through FPGAs [20][21][22][23], custom integrated circuits [24], and parallel computation [13]. The FPGA designs in Refs.…”
Section: Related Work
confidence: 99%
“…[21][22][23] implemented feed-forward fully connected neural networks, while the design we implement is a Bayesian network that operates through both feed-forward and feed-back belief propagation. Ghani et al. [21] proposed a multiplier-less circuit for spiking neural networks. Their design is very area-efficient and allows large-scale FPGA-based implementations of spiking neural networks.…”
Section: Related Work
confidence: 99%
“…The ability to reconfigure FPGA logic blocks and interconnect has attracted researchers to explore the mapping of SNNs to FPGAs [5][6][7][8][9]. Efficient, low-area/power implementations of synaptic junctions and neuron interconnect are key to scalable SNN hardware implementations.…”
Section: Introduction
confidence: 99%