1999
DOI: 10.1117/12.343072

<title>MASPINN: novel concepts for a neuroaccelerator for spiking neural networks</title>

Abstract: We present the basic architecture of a Memory Optimized Accelerator for Spiking Neural Networks (MASPINN). The accelerator architecture exploits two novel concepts for an efficient computation of spiking neural networks: weight caching and a compressed memory organization. These concepts allow a further parallelization in processing and reduce bandwidth requirements on the accelerator's components. Therefore, they pave the way to dedicated digital hardware for real-time computation of more complex networks of pulse-c…
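The two concepts named in the abstract can be illustrated with a small sketch. Everything below is an illustrative assumption (network size, data structures, names), not the paper's actual hardware design: a compressed spike list stores only the indices of neurons that fired, and a weight cache fetches each firing neuron's weight row from memory only once.

```python
# Illustrative sketch of the two ideas in the abstract (assumptions, not the
# paper's hardware design): (1) a compressed spike list holds only the indices
# of neurons that fired, (2) a weight cache keeps fetched weight rows around
# so each row is read from main memory at most once.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                     # illustrative network size
weights = rng.normal(size=(N, N)).astype(np.float32)

# Dense spike vector: mostly zeros in a sparsely active network.
spikes = np.zeros(N, dtype=bool)
spikes[rng.choice(N, size=20, replace=False)] = True

# (1) Compressed representation: keep only the active indices.
spike_list = np.flatnonzero(spikes)          # 20 entries instead of 1000

# (2) Weight caching: fetch a weight row once, then reuse it from the cache.
weight_cache = {}
def cached_row(i):
    if i not in weight_cache:
        weight_cache[i] = weights[i]         # one "memory fetch" per new row
    return weight_cache[i]

# Membrane-potential update driven only by the compressed spike list.
potential = np.zeros(N, dtype=np.float32)
for i in spike_list:
    potential += cached_row(i)

# Same result as the dense computation, with far fewer memory accesses.
dense = spikes.astype(np.float32) @ weights
assert np.allclose(potential, dense, atol=1e-4)
```

The bandwidth saving comes from touching only 20 weight rows instead of all 1000, which is the kind of reduction the abstract attributes to the two concepts.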

Cited by 15 publications
(8 citation statements)
References 9 publications
“…In order to achieve real-time computation of very complex SANNs we proposed an accelerator system called memory optimized accelerator for spiking neural networks (MASPINN), which is an accelerator board connected to a host computer via a PCI-bus [13]. As shown in Fig.…”
Section: Accelerator System for Spiking Neural Networks: MASPINN
confidence: 99%
“…Several architectures have already been presented, ranging from large systems [13], [14], [15] able to process very large networks (> 10,000 neurons) at many times real-time speed, to compact designs where small networks are directly mapped onto hardware [6], [16], [17], [18] using small neuron processing elements (PEs). These implementations span a small part of a larger design space which allows a trade-off to be made between area (chip area and memory footprint) and calculation time.…”
Section: Introduction
confidence: 99%
“…(Schoenauer et al., 1998). The key concepts of an efficient mapping are load balancing over an array of processing elements, minimizing inter-processing-element communication, and minimizing synchronization between the processing elements (PEs).…”
Section: Mapping Neural Networks on Parallel Computers
confidence: 99%
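The load-balancing goal described in the last citation statement can be sketched minimally. This is a generic block partition of neurons over PEs, offered as an illustration of the concept only, not the cited work's actual mapping algorithm; the function name and sizes are assumptions.

```python
# Illustrative sketch (not the cited method): distribute n_neurons over
# n_pes processing elements so every PE's share differs by at most one
# neuron, the basic form of the load balancing described above.
def partition(n_neurons, n_pes):
    """Return a list mapping each neuron index to the PE that owns it."""
    base, extra = divmod(n_neurons, n_pes)
    # The first `extra` PEs take one additional neuron each.
    sizes = [base + (1 if p < extra else 0) for p in range(n_pes)]
    assignment = []
    for pe, size in enumerate(sizes):
        assignment.extend([pe] * size)
    return assignment

pes = partition(10, 3)
loads = [pes.count(p) for p in range(3)]
# PE loads differ by at most one neuron: [4, 3, 3].
assert max(loads) - min(loads) <= 1
```

Keeping neighboring neuron indices on the same PE, as this block partition does, also tends to reduce inter-PE communication when connectivity is local, which touches the second goal the statement mentions.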