2005
DOI: 10.1073/pnas.0500495102
Generalized Bienenstock–Cooper–Munro rule for spiking neurons that maximizes information transmission

Abstract: Maximization of information transmission by a spiking-neuron model predicts changes of synaptic connections that depend on the timing of pre- and postsynaptic spikes and on the postsynaptic membrane potential. Under the assumption of Poisson firing statistics, the synaptic update rule exhibits all of the features of the Bienenstock–Cooper–Munro rule, in particular, regimes of synaptic potentiation and depression separated by a sliding threshold. Moreover, the learning rule is also applicable to the more realistic c…
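The sliding-threshold behavior the abstract describes can be illustrated with the classic rate-based BCM rule that the paper's spiking rule generalizes. The sketch below is the standard textbook form, dw/dt ∝ x·y·(y − θ) with θ tracking a running average of y², not the paper's spiking derivation; all constants and the Poisson input model are illustrative assumptions.

```python
import numpy as np

# Minimal rate-based BCM sketch (illustrative; not the paper's spiking rule).
# dw/dt = eta * x * y * (y - theta), with a sliding threshold theta
# that tracks a running average of the squared postsynaptic rate y**2.

rng = np.random.default_rng(0)
n_inputs = 10
w = rng.uniform(0.0, 0.5, n_inputs)    # synaptic weights
theta = 1.0                            # sliding modification threshold
eta, tau_theta = 1e-3, 100.0           # learning rate, threshold time constant

for step in range(10_000):
    x = rng.poisson(2.0, n_inputs).astype(float)  # presynaptic rates (Poisson-like)
    y = max(w @ x, 0.0)                           # postsynaptic rate (rectified)
    w += eta * x * y * (y - theta)                # LTP if y > theta, LTD if y < theta
    w = np.clip(w, 0.0, 5.0)                      # keep weights bounded
    theta += (y**2 - theta) / tau_theta           # threshold slides with <y**2>
```

Because θ grows when activity is high, strong postsynaptic firing eventually pushes the neuron back into the depression regime, which is what stabilizes plain Hebbian growth in the BCM framework.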

Cited by 113 publications (156 citation statements) · References 33 publications
“…As an alternative, various researchers have proposed different ways to exploit recent advances in neuroscience concerning synaptic plasticity [1], especially IP [10,9] or STDP [28,19], which is usually presented as the Hebb rule revisited in the context of temporal coding. A current trend is to propose computational justifications for plasticity-based learning rules in terms of entropy minimization [5], log-likelihood [35], or mutual information maximization [8,46,7]. However, since STDP is a local unsupervised rule for adapting connection weights, such synaptic plasticity is not efficient enough to control the behavior of an SNN in the context of a given task.…”
Section: Spiking Neuron Network (mentioning)
confidence: 99%
“…Theoretical concepts of synaptic modification as well as experimental work (Bienenstock et al., 1982; Kirkwood et al., 1996; Toyoizumi et al., 2005) suggest that the threshold for inducing either LTP or LTD at a synapse depends on the history of synaptic activity. According to this concept, previous activity reduces the probability of LTP induction, whereas it increases the probability of LTD induction.…”
Section: Interaction of Motor Practice with PAS-Induced Plasticity (mentioning)
confidence: 99%
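The history dependence this excerpt describes is exactly the sliding-threshold mechanism sketched above: sustained activity raises θ, shifting the balance toward LTD. A toy illustration with hypothetical numbers:

```python
# Toy illustration of the threshold's history dependence (hypothetical numbers;
# theta follows a running average of y**2, as in the BCM sketch above).
theta = 1.0
for y in [3.0] * 50:                 # sustained high postsynaptic activity...
    theta += (y**2 - theta) / 10.0   # ...drives theta toward y**2 = 9
y_test = 2.0                         # a rate that was supra-threshold before
print("LTP" if y_test > theta else "LTD")  # now sub-threshold -> prints "LTD"
```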
“…Such a scenario could also apply to cortex, where SFA and STDP coexist (30–33) and where decay of feedforward connections could be prevented by delayed inputs arising from feedback loops via higher cortical areas (34, 35). In computational learning theories, STDP has been linked to temporal difference learning (36) and to maximization of mutual information (37). Here, we extend this list of computational functions of STDP to include gradient-descent error minimization.…”
Section: Discussion (mentioning)
confidence: 99%
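The STDP rule this excerpt refers to is commonly modeled with a pair-based exponential window, sketched below. The amplitudes and time constants are illustrative placeholders, not values from the cited studies.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative constants, not from the cited work).
# Pre-before-post spike pairs potentiate; post-before-pre pairs depress,
# each with an exponential dependence on the spike-time difference.

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms

def stdp_dw(dt_ms: float) -> float:
    """Weight change for a single pre/post spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:                                   # pre leads post -> LTP
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)      # post leads pre -> LTD

# Example: a pre spike 5 ms before a post spike potentiates; the reverse depresses.
print(stdp_dw(5.0), stdp_dw(-5.0))
```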