To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. To address the efficiency and fidelity with which neurons can perform such computations, we developed a theory to characterize the capacity of feedforward networks to generate desired spike sequences. We find the maximum number of desired output spikes a neuron can implement to be 0.1-0.3 per synapse. We further present a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences. We apply this framework to reconstruct synaptic weights from spiking activity and study the precision with which the temporal structure of ongoing behavior can be inferred from the spiking of premotor neurons. This work provides a powerful approach for characterizing the computational and learning capacities of single neurons and neuronal circuits.
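To make the kind of supervised learning described here concrete, the sketch below trains a linear-kernel neuron to emit spikes at desired times with a simple error-correcting rule: potentiate the synapses that were active at a missed desired spike time, and depress those active at a spurious output spike. This is an illustrative sketch only, not the paper's exact rule; the exponential kernel, the tolerance window, and all parameter values are hypothetical choices, and post-spike reset is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (hypothetical, not taken from the paper).
N, T, dt = 200, 300.0, 1.0       # afferents, trial duration (ms), time step (ms)
tau = 10.0                       # PSP decay time constant (ms)
U_th, eta, tol = 1.0, 5e-3, 2.0  # threshold, learning rate, timing tolerance (ms)
t_grid = np.arange(0.0, T, dt)

def psp(t):
    """Causal exponential postsynaptic-potential kernel."""
    return np.exp(-t / tau) * (t >= 0)

# One fixed input pattern: independent Poisson spike trains per afferent.
spikes = [np.sort(rng.uniform(0.0, T, rng.poisson(3))) for _ in range(N)]
desired = np.array([80.0, 170.0, 240.0])  # desired output spike times (ms)

# Precompute each afferent's summed PSP trace on the time grid.
traces = np.zeros((N, t_grid.size))
for i, ts in enumerate(spikes):
    if ts.size:
        traces[i] = psp(t_grid[:, None] - ts[None, :]).sum(axis=1)

w = rng.normal(0.0, 0.01, N)
for epoch in range(5000):
    U = w @ traces
    # Output spikes: upward threshold crossings (post-spike reset is ignored).
    out = t_grid[1:][(U[1:] >= U_th) & (U[:-1] < U_th)]
    changed = False
    for t_d in desired:  # missed desired spike -> potentiate inputs active at t_d
        if out.size == 0 or np.abs(out - t_d).min() > tol:
            w += eta * traces[:, int(t_d / dt)]
            changed = True
    for t_s in out:      # spurious output spike -> depress inputs active at t_s
        if np.abs(desired - t_s).min() > tol:
            w -= eta * traces[:, int(t_s / dt)]
            changed = True
    if not changed:
        break

print("output spike times (ms):", np.round(out, 1))
```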
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning such balanced weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly.
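The flavor of learning a robust, balanced weight configuration can be sketched as margin-based perceptron learning under sign constraints (Dale's law): demanding a classification margin for noise robustness drives the magnitudes of both excitatory and inhibitory weights upward until their drives largely cancel. All sizes, input statistics, and parameters below are hypothetical, and this is an illustrative sketch rather than the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and input statistics, for illustration only.
N_E, N_I, P = 300, 100, 150
signs = np.concatenate([np.ones(N_E), -np.ones(N_I)])
X = rng.poisson(5.0, size=(P, N_E + N_I)).astype(float)  # afferent activations
y = rng.choice([-1.0, 1.0], size=P)                      # desired selectivity

w = np.full(N_E + N_I, 0.01)    # weight magnitudes; signed weight is signs * w
b, eta, kappa = 0.0, 0.01, 5.0  # bias, learning rate, robustness margin

for epoch in range(2000):
    errors = 0
    for mu in rng.permutation(P):
        u = (signs * w) @ X[mu] + b
        if y[mu] * u <= kappa:  # demand a margin, for robustness to output noise
            # Move the signed weight toward y * x, clipping magnitudes at zero
            # so each synapse keeps its excitatory or inhibitory identity.
            w = np.maximum(w + eta * y[mu] * signs * X[mu], 0.0)
            b += eta * y[mu]
            errors += 1
    if errors == 0:
        break

# Balance check: total E and I drives grow large yet nearly cancel on average.
E_drive = X[:, :N_E] @ w[:N_E]
I_drive = X[:, N_E:] @ w[N_E:]
print(f"mean E drive {E_drive.mean():.1f}, mean I drive {I_drive.mean():.1f}, "
      f"mean net {(E_drive - I_drive).mean():.1f}")
```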
We study the computational capacity of a model neuron, the Tempotron, which classifies sequences of spikes by linear-threshold operations. We use statistical mechanics and extreme value theory to derive the capacity of the system in random classification tasks. In contrast to its static analog, the Perceptron, the Tempotron's solution space consists of a large number of small clusters of weight vectors. The capacity of the system per synapse is finite in the large-size limit and weakly diverges with the stimulus duration relative to the membrane and synaptic time constants.

PACS numbers: 87.18.Sn, 87.19.ll, 87.19.lv

Neural network models of supervised learning are usually concerned with processing static spatial patterns of intensities. A famous example is learning in a single-layer binary neuron, the Perceptron [1, 2]. However, in most neuronal systems, neural activities are in the form of time series of spikes. Furthermore, stimulus representations in some sensory systems are characterized by a small number of precisely timed spikes [3, 4], suggesting that the brain possesses a machinery for extracting information embedded in the timings of spikes, not only in their overall rate. Thus, understanding the power and limitations of spike-timing-based computation and learning is of fundamental importance in computational neuroscience.

Gütig and Sompolinsky [5] have recently suggested a simple model, the Tempotron, for decoding information embedded in spatio-temporal spike patterns. The Tempotron is an Integrate and Fire (IF) neuron with $N$ input synapses of strength $\omega_i$, $i = 1, \ldots, N$. Each input pattern is represented by $N$ sequences of spikes, where the spike timings for afferent $i$ are denoted by $\{t_i\}$. The membrane potential is given by

$$U(t) = \sum_{i=1}^{N} \omega_i \sum_{t_i} u(t - t_i),$$

where $u(t)$ denotes a fixed causal temporal kernel. An example is the difference-of-exponentials form, $u(t) \propto e^{-t/\tau_m} - e^{-t/\tau_s}$ for $t \ge 0$, where $\tau_m$ and $\tau_s$ correspond, respectively, to the membrane and synaptic time constants [6]. The Tempotron fires a spike whenever $U$ crosses the threshold, $U_{\mathrm{th}}$, from below [7] (Fig. 1a). The Tempotron performs a binary classification of its input patterns by firing one or more output spikes when presented with a 'target' (+1) pattern and remaining quiescent during a 'null' (-1) pattern.

In this Letter we present a theoretical study of the computational power of the Tempotron. We focus on the standard task of classifying a batch of $P = \alpha N$ random patterns, where $\alpha$ denotes the number of patterns per input synapse. For each pattern, the timings of the input spikes from each input neuron are randomly chosen from independent Poisson processes with rate $1/T$, where $T$ is the duration of the input patterns, and the desired output, $y = \pm 1$, is randomly and independently chosen with equal probabilities. A solution to the classification problem is a set of synaptic weights $\{\omega_i\}$ that yields a correct classification of all $P$ patterns. We will address several fundamental questions. First, numerical simulations based on a simple error-correcting on-line learning algorithm suggest that...
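The model as defined above can be simulated directly. The sketch below trains a Tempotron on the random classification task with the error-correcting rule of Gütig and Sompolinsky [5], which on each error nudges every weight by the summed kernel contributions of its input spikes at the time of the voltage maximum. Parameter values are illustrative assumptions, and post-spike dynamics (reset) are ignored.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Tempotron sketch; parameter values are illustrative, not from the Letter.
N, T, dt = 100, 500.0, 1.0   # input synapses, pattern duration (ms), grid step (ms)
tau_m, tau_s = 15.0, 3.75    # membrane and synaptic time constants (ms)
U_th, lam, P = 1.0, 2e-3, 100  # threshold, learning rate, number of patterns
t_grid = np.arange(0.0, T, dt)

# Difference-of-exponentials kernel, normalized so its peak equals 1.
t_peak = tau_m * tau_s / (tau_m - tau_s) * np.log(tau_m / tau_s)
u0 = 1.0 / (np.exp(-t_peak / tau_m) - np.exp(-t_peak / tau_s))

def kernel(t):
    return u0 * (np.exp(-t / tau_m) - np.exp(-t / tau_s)) * (t >= 0)

def trace(ts):
    """Summed PSP trace of one afferent's spike train on the time grid."""
    if ts.size == 0:
        return np.zeros(t_grid.size)
    return kernel(t_grid[:, None] - ts[None, :]).sum(axis=1)

# Random task: Poisson input spikes at rate 1/T per afferent, random labels.
patterns = [np.array([trace(rng.uniform(0, T, rng.poisson(1.0))) for _ in range(N)])
            for _ in range(P)]  # per pattern: an (N, len(t_grid)) array of traces
labels = rng.choice([-1, 1], size=P)

w = rng.normal(0.0, 1e-3, N)
for epoch in range(500):
    n_err = 0
    for mu in rng.permutation(P):
        U = w @ patterns[mu]            # voltage trace (post-spike reset ignored)
        k = int(np.argmax(U))           # time bin of the voltage maximum, t_max
        fired = U[k] >= U_th
        if fired != (labels[mu] == 1):
            # Error-correcting update at t_max: dw_i = +/- lam * sum over
            # t_i < t_max of u(t_max - t_i); because the kernel is causal,
            # patterns[mu][:, k] is exactly this sum for each synapse i.
            w += lam * labels[mu] * patterns[mu][:, k]
            n_err += 1
    if n_err == 0:
        break

print(f"epochs used: {epoch + 1}, remaining training errors: {n_err}")
```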