Computing complex spiking artificial neural networks (SANNs) on conventional hardware platforms falls far short of real-time requirements. We therefore propose a neuro-processor, called NeuroPipe-Chip, as part of an accelerator board. In this paper, we introduce two new chip-level concepts to speed up the computation of SANNs. These concepts are implemented in a prototype of the NeuroPipe-Chip. We present the hardware structure of the prototype and evaluate its performance in a system simulation based on a hardware description language (HDL). For the computation of a simple SANN for image segmentation, the NeuroPipe-Chip operating at 100 MHz shows an improvement of more than two orders of magnitude over a 500 MHz Alpha workstation and approaches real-time requirements for the computation of SANNs on the order of 10⁶ neurons. Hence, such an accelerator would allow complex SANNs to be applied to real-world tasks such as real-time image processing. The NeuroPipe-Chip has been fabricated in an Alcatel 0.35-µm digital CMOS technology.
In this paper, we present a digital system called SP²INN for simulating very large-scale spiking neural networks (VLSNNs) comprising, e.g., 1,000,000 neurons with several million connections in total. SP²INN makes it possible to simulate VLSNNs with features such as synaptic short-term plasticity, long-term plasticity, and configurable connections. For such VLSNNs, computing the connectivity, including the synapses, is the main challenge besides computing the neuron model. We describe the configurable neuron model of SP²INN before focusing on the computation of the connectivity. Within SP²INN, connectivity parameters are stored in an external memory, while the actual connections are computed online based on defined connectivity rules. The communication between the SP²INN processor and the external memory represents a bottleneck for system performance. We show how this problem is handled efficiently by introducing a tag scheme and a target-oriented addressing method. The SP²INN processor is described in a high-level hardware description language. We present its implementation in a 0.35 µm CMOS technology and also discuss the advantages and drawbacks of implementing it on a field-programmable gate array.
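The idea of computing connections online from connectivity rules, rather than storing every synapse explicitly, can be illustrated with a minimal sketch. The local-neighbourhood rule and all names below are hypothetical, chosen only to show the principle; they are not SP²INN's actual scheme.

```python
# Hypothetical sketch: connections are derived on the fly from a rule,
# so only per-rule parameters (not a full synapse list) need external memory.
def targets_of(neuron, grid_w, grid_h, radius=1):
    """Illustrative rule: a neuron connects to its grid neighbours."""
    x, y = neuron % grid_w, neuron // grid_w
    out = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue  # no self-connection
            tx, ty = x + dx, y + dy
            if 0 <= tx < grid_w and 0 <= ty < grid_h:
                out.append(ty * grid_w + tx)
    return out

# Corner neuron 0 of a 4x4 grid connects to its three in-grid neighbours.
print(targets_of(0, 4, 4))  # → [1, 4, 5]
```

For a million-neuron network, recomputing targets from a compact rule trades a little arithmetic for a large reduction in memory traffic, which is exactly the bottleneck the abstract identifies.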
The fast simulation of large networks of spiking neurons is a major task in the examination of biology-inspired vision systems. Networks of this type label features by synchronization of spikes, and there is strong demand to simulate these effects in real-world environments. As the calculations for one model neuron are complex, the digital simulation of large networks is not efficient using existing simulation systems. Consequently, it is necessary to develop special simulation techniques. This article introduces a wide range of concepts for the different parts of digital simulator systems for large vision networks and presents accelerators based on these foundations. The communication in pulse-coded neural networks (PCNNs) is based on spike exchange. In contrast to conventional model neurons, e.g. McCulloch & Pitts neurons, the generation of a spike requires high computational effort in connection with the time behaviour of the biological example. The computational effort for individual neuron calculations compared to whole-network processing is much higher in PCNNs than in conventional ANNs. Common simulation techniques for neural networks make use of vector representations for the neurons and matrix representations for the connection network [11]. These techniques are not suitable for PCNNs because the actual activity of one neuron cannot be represented by only one value. Hence, common simulation techniques based on the acceleration of matrix-vector calculations are not sufficient for PCNNs. A new simulation paradigm is required.
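The contrast between matrix-vector simulation and spike-based processing can be sketched briefly. This is an illustrative toy, not the article's simulator: it shows that when only a few neurons spike per time step, an event-driven update touching only those neurons' fan-out does the same work as a full matrix-vector product at a fraction of the cost.

```python
# Illustrative sketch: dense matrix-vector update vs. event-driven update.
# All names and sizes are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Sparse random connectivity (~5% of entries nonzero).
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.05)

spikes = [3, 42, 99]                  # only three neurons fired this step

# Dense update: O(n^2) work every step, regardless of activity.
activity = np.zeros(n)
activity[spikes] = 1.0
dense_input = weights @ activity

# Event-driven update: O(spikes x fan-out) work, same result.
event_input = np.zeros(n)
for s in spikes:
    event_input += weights[:, s]      # propagate each spike to its targets

assert np.allclose(dense_input, event_input)
```

With typical sparse spiking activity, the event-driven loop visits three columns instead of multiplying a million-entry matrix, which is the efficiency argument the text makes against matrix-vector acceleration for PCNNs.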
We present the basic architecture of a Memory Optimized Accelerator for Spiking Neural Networks (MASPINN). The accelerator architecture exploits two novel concepts for efficient computation of spiking neural networks: weight caching and a compressed memory organization. These concepts allow further parallelization in processing and reduce the bandwidth requirements on the accelerator's components. They therefore pave the way to dedicated digital hardware for real-time computation of more complex networks of pulse-coded neurons on the order of 10⁶ neurons. The programmable neuron model on which the accelerator is based is described extensively. This is intended to encourage discussion and suggestions on features that would be desirable to add to the current model.
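The benefit of weight caching can be shown with a minimal sketch. The class and numbers below are hypothetical, not MASPINN's actual design: the point is only that when several spikes within a time step share a source neuron, its weights are fetched from slow external memory once and then served from a cache.

```python
# Hypothetical sketch of weight caching to cut external-memory accesses.
class WeightCache:
    def __init__(self, fetch_row):
        self.fetch_row = fetch_row    # models a (slow) external-memory read
        self.cache = {}
        self.misses = 0               # number of external-memory fetches

    def row(self, src):
        if src not in self.cache:
            self.cache[src] = self.fetch_row(src)
            self.misses += 1
        return self.cache[src]

external_weights = {0: [0.1, 0.2], 7: [0.3, 0.4]}  # toy weight memory
wc = WeightCache(lambda s: external_weights[s])

spikes = [0, 7, 0, 0, 7]              # repeated source neurons in one step
rows = [wc.row(s) for s in spikes]    # five lookups...
print(wc.misses)                      # → 2 (...but only two memory fetches)
```

Serving repeated lookups from the cache is what lowers the bandwidth demand on external memory, which the abstract names as the limiting resource.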