This paper describes the design of a massively parallel computer that is suitable for computational neuroscience modeling of large-scale spiking neural networks in biological real time.
This paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture. Unlike existing hierarchical architectures for pattern recognition, the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene. These dynamics are acquired using biologically inspired, frameless, asynchronous event-driven vision sensors. Similar to cortical structures, subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows. The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces, which represent the recent temporal activity within a local spatial neighborhood. We demonstrate that this concept can be used robustly at all stages of an event-based hierarchical model. First-layer feature units operate on groups of pixels, while subsequent-layer feature units operate on the output of lower-level feature units. We report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task, achieving near 100 percent accuracy on each. We introduce a new seven-class moving face recognition task, achieving 79 percent accuracy.
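The core operation described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it keeps a per-pixel map of the most recent event timestamp and, for each incoming event, builds a time-surface by exponentially decaying the elapsed time within a local neighborhood. The names `time_surface`, `tau`, and `radius` are illustrative choices, not identifiers from the paper.

```python
import numpy as np

def time_surface(timestamps, x, y, t, radius=2, tau=50e-3):
    """Build a time-surface around an event at (x, y) occurring at time t.

    `timestamps` holds the most recent event time per pixel (-inf if the
    pixel has never fired). Each neighbor contributes exp(-(t - t_last)/tau),
    so recently active pixels are near 1 and stale pixels decay toward 0.
    """
    patch = timestamps[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t - patch) / tau)

# Maintain the per-pixel timestamp map as events arrive, then query it.
H, W = 32, 32
ts_map = np.full((H, W), -np.inf)
events = [(10, 10, 0.010), (11, 10, 0.020), (10, 11, 0.025)]
for ex, ey, et in events:
    ts_map[ey, ex] = et
    surface = time_surface(ts_map, ex, ey, et)
```

In the hierarchical model, such surfaces would be matched against learned prototypes at each layer, with `radius` and `tau` growing in deeper layers to realize the increasingly large spatio-temporal windows mentioned above.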
Retinitis pigmentosa (RP) is a progressive, inherited, monogenic or rarely digenic1 blinding disease caused by mutations in more than 71 different genes (https://sph.uth.edu/retnet/sum-dis.htm). It affects more than 2 million people worldwide. With the exception of a gene replacement therapy for one form of early-onset RP caused by mutation in the gene RPE65 (ref. 2), there is no approved therapy for RP. Optogenetic vision restoration3-5 is a mutation-independent approach for restoring visual function at the late stages of RP after vision is lost6-9. The open-label phase 1/2a PIONEER study (ClinicalTrials.gov identifier: NCT03326336; the clinical trial protocol is provided in the Supplementary Text) was designed to evaluate the safety (primary objective) and efficacy (secondary objective) of an investigational treatment for patients with advanced nonsyndromic RP that combines injection of an optogenetic vector (GS030-Drug Product (GS030-DP)) with wearing a medical device, namely light-stimulating goggles (GS030-Medical Device (GS030-MD)). The proof of concept for GS030-DP and the GS030-DP dose used in the PIONEER clinical trial were established in nonhuman primate studies10,11. The optogenetic vector, a serotype 2.7m8 (ref. 12) adeno-associated viral vector encoding the light-sensing channelrhodopsin protein ChrimsonR fused to the red fluorescent protein tdTomato13, was administered by a single intravitreal injection into the worse-seeing eye to target mainly foveal retinal ganglion cells10. The fusion partner tdTomato was included to increase the expression of ChrimsonR in the cell membrane10. The peak sensitivity of ChrimsonR-tdTomato is around 590 nm (amber)13. We chose ChrimsonR, which has one of the most red-shifted action spectra among the available optogenetic sensors, because amber light is safer and causes less pupil constriction10 than the blue light used to activate many other sensors.
The light-stimulating goggles capture images from the visual world using a neuromorphic camera that detects changes in intensity, pixel by pixel, as distinct events14. The goggles then transform the events into monochromatic images and project them in real time as local 595-nm light pulses onto the retina (Extended Data Fig. 1). Results: safety of the optogenetic vector and light-stimulating goggles. In this article, we describe the partial recovery of vision in one participant of the PIONEER study. At inclusion in the study, this 58-year-old male, who was diagnosed with RP 40 years ago, had visual acuity limited to light perception. The worse-seeing eye was treated with 5.0 × 10^10 vector genomes of the optogenetic vector. Both before and after the injection, we performed ocular examinations and assessed the anatomy of the retina based on optical coherence tomography images, color fundus photographs and fundus autofluorescence images taken on several occasions over 15 visits spanning 84 weeks according to the protocol (Extended Data Fig. 2). We monitored potential intraocular inflammation a...
The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker (Spiking Neural Network architecture) is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements of its very demanding target application, the SpiNNaker micro-architecture and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102-mm² die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements. Index Terms: asynchronous interconnect, chip multiprocessor, energy efficiency, globally asynchronous locally synchronous (GALS), network-on-chip, neuromorphic hardware, real-time simulation, spiking neural networks (SNNs).
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
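The limited-bit-precision constraint discussed above can be made concrete with a small sketch. This is an illustrative quantizer, not the training mechanism from the article: it rounds weights onto a signed fixed-point grid with `n_bits` of resolution, the kind of reduction a spiking DBN's synaptic weights undergo on precision-constrained neuromorphic hardware. The function name and the `w_max` range are assumptions for the example.

```python
import numpy as np

def quantize(weights, n_bits, w_max=1.0):
    """Round weights to a signed fixed-point grid with n_bits of
    precision, emulating limited synaptic weight resolution."""
    levels = 2 ** (n_bits - 1)       # signed representation
    step = w_max / levels            # smallest representable increment
    q = np.round(weights / step) * step
    return np.clip(q, -w_max, w_max)

w = np.array([0.13, -0.42, 0.87, -0.99])
w_q = quantize(w, n_bits=4)  # 16 levels, step 0.125
```

A precision-aware training scheme of the kind the article evaluates would apply such a quantizer inside the training loop, so that the learned weights already account for the target platform's bit width rather than being rounded only after training.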