For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perform experiments for the classical XOR problem, posed in a temporal setting, as well as for a number of other benchmark datasets. By comparing the (implicit) number of spiking neurons required to encode the interpolated XOR problem, we demonstrate that temporal coding requires significantly fewer neurons than instantaneous rate-coding.

2000 Mathematics Subject Classification: 82C32, 68T05, 68T10, 68T30, 92B20.
1998 ACM Computing Classification System: C.1.3, F.1.1, I.2.6, I.5.1.
Keywords and Phrases: spiking neurons; temporal coding; error-backpropagation.
Note: Work carried out under theme SEN4 "Evolutionary Systems and Applied Algorithmics". This paper has been submitted for publication; a short version was presented at the European Symposium on Artificial Neural Networks 2000 (ESANN'2000) in Bruges, Belgium.

Introduction

Due to its success in artificial neural networks, the sigmoidal neuron is reputed to be a successful model of biological neuronal behavior. By modeling the rate at which a single biological neuron discharges action potentials (spikes) as a monotonically increasing function of input, many useful applications of artificial neural networks have been built [22; 7; 37; 34], and substantial theoretical insights into the behavior of connectionist structures have been obtained [40; 27].

However, the spiking nature of biological neurons has recently led to explorations of the computational power associated with temporal information coding in single spikes [31; 21; 13; 26; 20; 17; 49]. In [32] it was proven that networks of spiking neurons can simulate arbitrary feedforward sigmoidal neural nets and can thus approximate any continuous function. In fact, it has been shown theoretically that spiking neural networks that convey information in individual spike times are computationally more powerful than neurons with sigmoidal activation functions [29].

As spikes can be described by 'event' coordinates (place, time) and the number of active (spiking) neurons is typically sparse, artificial spiking neural networks have been shown to allow for very efficient implementations of large neural networks [48; 33]. Single-spike-time computing has also been suggested as a new paradigm for VLSI neural network implementations [28] and would offer a drastic speed-up.

Network architectures based on spiking neurons that encode information in the individual spike times have yielded, amongst others, a self-organizing map akin to Kohonen's SOM [39] and a network
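The central difficulty named in the abstract above is that a neuron's spike time is a discontinuous function of its inputs. SpikeProp's resolution is to linearize the membrane potential around the firing time, so that the derivative of the spike time with respect to a weight becomes -(∂u/∂w)/(∂u/∂t), evaluated at the threshold crossing. The sketch below illustrates this under assumed settings: an alpha-shaped spike response kernel, a single neuron, and illustrative values for TAU and THRESHOLD. It is a minimal reconstruction of the idea, not the paper's exact network or parameters.

```python
import numpy as np

TAU = 7.0        # assumed membrane time constant (ms)
THRESHOLD = 1.0  # assumed firing threshold

def epsilon(s):
    """Spike response kernel: alpha-shaped PSP, zero for s <= 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def d_epsilon(s):
    """Time derivative of the kernel, needed for the linearization."""
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, np.exp(1.0 - s / TAU) * (1.0 / TAU - s / TAU**2), 0.0)

def first_spike_time(w, t_in, t_grid):
    """u(t) = sum_i w_i * eps(t - t_i); return the first threshold crossing."""
    u = (w[None, :] * epsilon(t_grid[:, None] - t_in[None, :])).sum(axis=1)
    above = np.nonzero(u >= THRESHOLD)[0]
    return None if len(above) == 0 else t_grid[above[0]]

def dt_dw(w, t_in, t_out):
    """SpikeProp linearization: dt_out/dw_i = -eps(t_out - t_i) / (du/dt at t_out)."""
    du_dt = (w * d_epsilon(t_out - t_in)).sum()
    return -epsilon(t_out - t_in) / du_dt

# Toy usage: two input spikes, one gradient step toward a target output spike time,
# i.e. descent on the error E = (t_out - target)^2 / 2.
w = np.array([0.7, 0.9])
t_in = np.array([0.0, 2.0])
t_grid = np.arange(0.0, 30.0, 0.01)
t_out = first_spike_time(w, t_in, t_grid)
target = 6.0
if t_out is not None:
    w -= 0.1 * (t_out - target) * dt_dw(w, t_in, t_out)
```

Note that the linearization is only valid on the rising flank of the potential, which is why the threshold discontinuity can be sidestepped at all: at the crossing, ∂u/∂t is positive and the implicit function theorem applies.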
Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks. Inspired by natural computation in the brain and by recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons that takes the timing of spike firing into account. SNNs surpass the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorization and a strong ability for fast adaptation. Today, the main challenge is to discover efficient learning rules that take advantage of the specific features of SNNs while keeping the nice properties (general-purpose, easy-to-use, available simulators, etc.) of traditional connectionist models. This chapter relates the history of the "spiking neuron" in Section 1 and summarizes the most currently-in-use models of neurons and synaptic plasticity in Section 2. The computational power of SNNs is addressed in Section 3, and the problem of learning in networks of spiking neurons is tackled in Section 4, with insights into the directions currently being explored for solving it. Finally, Section 5 discusses application domains and implementation issues and presents several simulation frameworks.
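As a concrete taste of the neuron models such a survey covers, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the most widely used spiking models; all constants here are illustrative assumptions rather than values taken from the chapter.

```python
import numpy as np

# Assumed illustrative parameters for a leaky integrate-and-fire neuron.
TAU_M = 10.0     # membrane time constant (ms)
V_REST = -70.0   # resting potential (mV)
V_THRESH = -54.0 # firing threshold (mV)
V_RESET = -75.0  # post-spike reset (mV)
DT = 0.1         # Euler integration step (ms)

def simulate_lif(input_current, duration_ms):
    """Euler integration of tau_m * dV/dt = -(V - V_rest) + R*I,
    with the input resistance R folded into input_current."""
    steps = int(duration_ms / DT)
    v = V_REST
    spikes = []
    for step in range(steps):
        t = step * DT
        v += DT / TAU_M * (-(v - V_REST) + input_current(t))
        if v >= V_THRESH:      # threshold crossing: emit a spike, reset
            spikes.append(t)
            v = V_RESET
    return spikes

# Constant suprathreshold drive produces regular firing.
print(simulate_lif(lambda t: 20.0, duration_ms=100.0))
```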
We demonstrate that spiking neural networks encoding information in the timing of single spikes are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on real-world data, and we demonstrate how temporal synchrony in a multilayer network can induce hierarchical clustering. We develop a temporal encoding of continuously valued data to obtain adjustable clustering capacity and precision with an efficient use of neurons: input variables are encoded in a population code by neurons with graded and overlapping sensitivity profiles. We also discuss methods for enhancing scale-sensitivity of the network and show how the induced synchronization of neurons within early RBF layers allows for the subsequent detection of complex clusters.
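The population code described above can be made concrete with a short sketch: each input neuron has a Gaussian sensitivity profile over the input range, and more strongly activated neurons fire earlier. The neuron count, overlap factor beta, coding window t_max, and firing cutoff below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def population_encode(x, x_min, x_max, n_neurons=8, t_max=10.0, beta=1.5):
    """Encode a scalar x as first-spike times of n_neurons with overlapping
    Gaussian sensitivity profiles: high activation -> early spike."""
    centers = np.linspace(x_min, x_max, n_neurons)
    width = beta * (x_max - x_min) / (n_neurons - 1)  # overlap between neighbors
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)  # in (0, 1]
    spike_times = t_max * (1.0 - activation)          # strongly driven fire early
    spike_times[activation < 0.1] = np.inf            # weakly driven never fire
    return spike_times

# A single analog value becomes a small volley of spike times.
print(population_encode(0.3, x_min=0.0, x_max=1.0))
```

The design choice worth noting is the overlap: broad, overlapping profiles let a few coarse neurons jointly represent a value precisely, which is what gives the adjustable trade-off between capacity and precision mentioned above.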
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions.
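A schematic reading of the tag-and-modulator mechanism described above, in code: attentional feedback stamps decaying tags onto the synapses that contributed to the chosen response, and a globally released neuromodulatory signal (here a scalar prediction error, delta) gates how tagged synapses change. All names and constants are hypothetical placeholders, not the paper's actual equations.

```python
import numpy as np

ALPHA = 0.9  # assumed tag decay per time step
BETA = 0.1   # assumed learning rate

def update(w, tags, pre, feedback, delta):
    """One plasticity step: feedback marks the credited pathway with tags,
    and the global modulator delta converts tags into weight change."""
    tags = ALPHA * tags + np.outer(feedback, pre)  # tag contributing synapses
    w = w + BETA * delta * tags                    # plasticity = modulator x tag
    return w, tags

# Toy usage: 3 input units, 2 association units.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(2, 3))
tags = np.zeros_like(w)
w, tags = update(w, tags, pre=np.array([1.0, 0.0, 1.0]),
                 feedback=np.array([0.0, 1.0]), delta=0.5)
```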