Classical experiments on spike-timing-dependent plasticity (STDP) use a protocol based on pairs of presynaptic and postsynaptic spikes repeated at a given frequency to induce synaptic potentiation or depression. Standard STDP models have therefore expressed the weight change as a function of pairs of presynaptic and postsynaptic spikes. Unfortunately, such pair-based STDP models cannot account for the dependence of plasticity on the repetition frequency of the spike pairs, nor can they reproduce recent triplet and quadruplet experiments. Here, we examine a triplet rule (i.e., a rule that considers sets of three spikes: two presynaptic and one postsynaptic, or one presynaptic and two postsynaptic) and compare it to classical pair-based STDP learning rules. With such a triplet rule, it is possible to fit experimental data from visual cortical slices as well as from hippocampal cultures. Moreover, when assuming stochastic spike trains, the triplet learning rule can be mapped to a Bienenstock-Cooper-Munro learning rule.
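As a concrete illustration of how such a rule can be simulated, the following is a minimal sketch of a trace-based triplet STDP update in Python. It assumes one fast and one slow trace on each side of the synapse; the function name, parameter values, and discrete-time integration scheme are illustrative assumptions, not the fitted values or implementation from the original study.

```python
# Minimal sketch of a triplet STDP rule with pre/post synaptic traces.
# All parameter values below are illustrative assumptions.
import numpy as np

def triplet_stdp(pre_spikes, post_spikes, dt=1e-4, T=1.0,
                 tau_plus=16.8e-3, tau_minus=33.7e-3,   # fast (pair) trace time constants (s)
                 tau_x=101e-3, tau_y=125e-3,            # slow (triplet) trace time constants (s)
                 A2_plus=5e-3, A2_minus=7e-3,           # pair amplitudes
                 A3_plus=6e-3, A3_minus=2e-4,           # triplet amplitudes
                 w0=0.5):
    """Integrate the weight change for given pre/post spike times in [0, T)."""
    n_steps = int(T / dt)
    pre = np.zeros(n_steps, dtype=bool)
    post = np.zeros(n_steps, dtype=bool)
    pre[(np.asarray(pre_spikes) / dt).astype(int)] = True
    post[(np.asarray(post_spikes) / dt).astype(int)] = True

    r1 = r2 = o1 = o2 = 0.0   # fast/slow presynaptic and postsynaptic traces
    w = w0
    for t in range(n_steps):
        # exponential decay of all traces (forward Euler)
        r1 -= dt / tau_plus * r1
        r2 -= dt / tau_x * r2
        o1 -= dt / tau_minus * o1
        o2 -= dt / tau_y * o2
        if pre[t]:
            # depression at a presynaptic spike: pair term plus a triplet term
            # gated by the slow presynaptic trace (value before this spike)
            w -= o1 * (A2_minus + A3_minus * r2)
            r1 += 1.0
            r2 += 1.0
        if post[t]:
            # potentiation at a postsynaptic spike: pair term plus a triplet term
            # gated by the slow postsynaptic trace (value before this spike)
            w += r1 * (A2_plus + A3_plus * o2)
            o1 += 1.0
            o2 += 1.0
    return w - w0
```

Setting A3_plus and A3_minus to zero reduces the sketch to an all-to-all pair-based rule, which is one way to see that it is the triplet terms that introduce the dependence on pairing frequency.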
A paradox of auditory and electrosensory neural systems is that they encode behaviorally relevant signals in the range of a few microseconds with neurons that are at least one order of magnitude slower. The importance of temporal coding in neural information processing is not yet clear. A central question is whether neuronal firing can be more precise than the time constants of the neuronal processes involved. Here we address this problem using the auditory system of the barn owl as an example. We present a modelling study based on computer simulations of a neuron in the laminar nucleus. Three observations explain the paradox. First, the spiking of an integrate-and-fire neuron driven by excitatory postsynaptic potentials with a width at half-maximum height of 250 μs has a temporal accuracy of 25 μs if the presynaptic signals arrive coherently. Second, the necessary degree of coherence in the signal arrival times can be attained during ontogenetic development by virtue of an unsupervised Hebbian learning rule, which selects connections with matching delays from a broad distribution of axons with random delays. Third, the learning rule also selects the correct delays from two independent groups of inputs, for example from the left and right ear.
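The first observation, that output spike timing can be sharper than the EPSP width when inputs arrive coherently, can be illustrated with a minimal leaky integrate-and-fire simulation. The sketch below is a toy version under assumed parameter values (input count, EPSP amplitude, threshold); it is not the model used in the study.

```python
# Toy leaky integrate-and-fire coincidence detector: output spike timing can be
# much sharper than the ~250 us membrane/EPSP time scale if inputs are coherent.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

dt = 5e-6          # 5 microsecond time step
T = 2e-3           # 2 ms simulation window
time = np.arange(0.0, T, dt)

tau_m = 250e-6     # membrane time constant, on the order of the EPSP width
v_thresh = 1.0     # firing threshold (arbitrary units)
n_inputs = 100     # number of presynaptic fibres
epsp_amp = 0.03    # voltage jump per presynaptic spike

def output_spike_time(jitter_sd):
    """Return the time of the first output spike for a given input jitter (s)."""
    arrival = 1e-3 + rng.normal(0.0, jitter_sd, n_inputs)  # arrivals around 1 ms
    v = 0.0
    for ti in time:
        v -= dt / tau_m * v                                    # leak
        v += epsp_amp * np.sum(np.abs(arrival - ti) < dt / 2)  # jumps from arriving spikes
        if v >= v_thresh:
            return ti
    return None

# Coherent inputs (25 us jitter) versus incoherent ones (500 us jitter):
coherent = [output_spike_time(25e-6) for _ in range(20)]
incoherent = [output_spike_time(500e-6) for _ in range(20)]
print("spread of coherent output spike times (s):",
      np.std([s for s in coherent if s is not None]))
print("incoherent trials that fired at all:", sum(s is not None for s in incoherent))
```

With these assumed numbers, coherent arrivals drive threshold crossings whose trial-to-trial spread is a few microseconds, while strongly jittered inputs typically fail to reach threshold, which mirrors the precision argument of the abstract.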
A correlation-based ("Hebbian") learning rule at a spike level with millisecond resolution is formulated, mathematically analyzed, and compared with learning in a firing-rate description. The relative timing of presynaptic and postsynaptic spikes influences synaptic weights via an asymmetric "learning window." A differential equation for the learning dynamics is derived under the assumption that the time scales of learning and neuronal spike dynamics can be separated. The differential equation is solved for a Poissonian neuron model with stochastic spike arrival. It is shown that correlations between input and output spikes tend to stabilize structure formation. With an appropriate choice of parameters, learning leads to an intrinsic normalization of the average weight and the output firing rate. Noise generates diffusion-like spreading of synaptic weights.
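A common explicit form for such an asymmetric learning window is a pair of exponentials with opposite signs and different time constants on either side of zero delay. The sketch below shows this form together with an all-to-all pairing scheme; the amplitudes, time constants, and function names are illustrative assumptions, not the specific window analyzed in the paper.

```python
# Sketch of an asymmetric exponential "learning window" W(s), s = t_post - t_pre.
# Parameter values are illustrative assumptions.
import numpy as np

A_plus, A_minus = 0.01, 0.012       # potentiation / depression amplitudes
tau_plus, tau_minus = 17e-3, 34e-3  # window time constants (s)

def learning_window(s):
    """Weight change for a single pre/post spike pair separated by s seconds."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0,
                    A_plus * np.exp(-s / tau_plus),     # post after pre: potentiation
                    -A_minus * np.exp(s / tau_minus))   # post before pre: depression

def total_weight_change(pre_spikes, post_spikes):
    """Sum the window over all pre/post pairs (all-to-all spike pairing)."""
    pre = np.asarray(pre_spikes)[:, None]
    post = np.asarray(post_spikes)[None, :]
    return learning_window(post - pre).sum()

# A post spike 10 ms after a pre spike potentiates; the reversed order depresses.
print(total_weight_change([0.0], [0.010]))   # > 0
print(total_weight_change([0.010], [0.0]))   # < 0
```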
Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing-dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire-type neuron models, in which each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can depend only on spike timing and, potentially, on the membrane potential and the current value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of these variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relation to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
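As an example of the kind of low-dimensional phenomenological description meant here, the following sketch implements a Tsodyks-Markram-style short-term plasticity synapse with one depression and one facilitation variable per synapse; the parameter values and the exact update order at a spike are illustrative assumptions rather than definitions taken from the review.

```python
# Sketch of a phenomenological short-term plasticity synapse with two state
# variables per synapse. Parameter values are illustrative assumptions.
import math

def short_term_synapse(spike_times, U=0.1, tau_rec=0.1, tau_fac=1.0):
    """Return the relative efficacy of each presynaptic spike.

    x: fraction of available resources (depression), recovers with tau_rec.
    u: utilisation of resources (facilitation), relaxes back to U with tau_fac.
    """
    x, u = 1.0, U
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # resources recover
            u = U + (u - U) * math.exp(-dt / tau_fac)      # facilitation decays
        u = u + U * (1.0 - u)      # facilitation jump at the spike
        efficacies.append(u * x)   # efficacy transmitted by this spike
        x = x * (1.0 - u)          # resources consumed by the spike
        last_t = t
    return efficacies

# Spike-by-spike modulation of efficacy for a regular 20 Hz presynaptic train.
print(short_term_synapse([0.05 * k for k in range(8)]))
```

Because the synapse state is just two scalars updated at spike times, models of this form compose naturally with integrate-and-fire neurons in large-scale simulations, which is the compatibility requirement stated above.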
Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.