2021
DOI: 10.1101/2021.11.20.469405
Preprint
Evolutionary and spike-timing-dependent reinforcement learning train spiking neuronal network motor control

Abstract: Biological learning operates at multiple interlocking timescales, from long evolutionary stretches down to the relatively short time span of an individual’s life. While each process has been simulated individually as a basic learning algorithm in the context of spiking neuronal networks (SNNs), the integration of the two has remained limited. In this study, we first train SNNs separately, with individual model learning using spike-timing-dependent reinforcement learning (STDP-RL) and evolutionary (EVOL) learning […]

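The abstract pairs two optimization strategies acting on the same network weights: a reward-modulated, spike-timing-dependent update (STDP-RL) and a population-based evolutionary search (EVOL). As a rough illustration of how updates at these two timescales can share parameters, here is a minimal NumPy sketch; the function names, the eligibility-trace input, and the toy fitness function are assumptions for illustration only and do not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stdp_rl_update(w, eligibility, reward, lr=0.01):
    """Reward-modulated STDP sketch (hypothetical form): scale each synapse's
    spike-timing eligibility trace by a global reward signal."""
    return np.clip(w + lr * reward * eligibility, 0.0, 1.0)

def evol_step(w, fitness_fn, sigma=0.05, n_offspring=16, lr=0.1):
    """Evolutionary-strategy sketch: perturb the weight matrix with Gaussian
    noise and move toward the higher-fitness perturbations."""
    noise = rng.normal(size=(n_offspring,) + w.shape)
    fitness = np.array([fitness_fn(w + sigma * eps) for eps in noise])
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
    grad = (fitness[:, None] * noise.reshape(n_offspring, -1)).mean(axis=0)
    return w + (lr / sigma) * grad.reshape(w.shape)

# Toy usage: the quadratic "fitness" stands in for an environment reward
# (e.g., time balanced in a motor-control task); it is not the paper's objective.
w = rng.uniform(0.0, 1.0, size=(10, 10))
for _ in range(50):
    w = evol_step(w, lambda x: -np.mean((x - 0.5) ** 2))

eligibility = rng.normal(size=w.shape)  # would come from pre/post spike timing
w = stdp_rl_update(w, eligibility, reward=1.0)
```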
Cited by 4 publications (5 citation statements)
References 89 publications (153 reference statements)
“…Inspired by these mechanisms, Najarro and Risi further showed an added advantage of Hebbian and other biological plasticity mechanisms. Including additional plasticity mechanisms and randomizing synaptic connections and weights might lead to even better performance, as such capabilities of multi-timescale learning in SNNs for learning visual-motor behaviors have been shown in another study [95].…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…could use different types of dopamine receptors), but we are not aware of any direct experimental evidence of how associations between different rewards and respective motor actions take place at multiple timescales. Combining different timescales of reward/behavior in parallel, or independently [95], has also been shown to enhance performance. The timescales underlying other learning mechanisms (such as homeostatic plasticity, sleep consolidation, etc.) could provide additional benefits [105].…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…could use different types of dopamine receptors), but we are not aware of any direct experimental evidence of how associations between different rewards and respective motor actions take place at multiple timescales. Combining different timescales of reward/behavior in parallel, or independently [96], has also been shown to enhance performance. The timescales underlying other learning mechanisms (such as homeostatic plasticity, sleep consolidation, etc.) could provide additional benefits [106].…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…Inspired by these mechanisms, Najarro and Risi further showed an added advantage of Hebbian and other biological plasticity mechanisms. Including additional plasticity mechanisms and randomizing synaptic connections and weights might lead to even better performance, as such capabilities of multi-timescale learning in SNNs for learning visual-motor behaviors have been shown in another study [96].…”
Section: Discussion (citation type: mentioning)
Confidence: 99%
“…Automated optimization methods have been previously used for simpler networks (e.g., recurrent point-neuron spiking networks) (Nicola and Clopath 2017; Sussillo and Abbott 2009; Dura-Bernal et al. 2017; Carlson et al. 2014; Hasegan et al. 2021). However, optimization of large-scale biophysically detailed networks typically requires expert-guided parameter adjustments (Bezaire et al. 2016; Markram et al. 2015), for example through parameter sweeps (grid search) (Billeh et al. 2020).…”
Section: Methods (citation type: mentioning)
Confidence: 99%
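The parameter sweep (grid search) mentioned in the excerpt above is straightforward to sketch. The snippet below is a generic illustration, not the cited papers' pipelines; the parameter names (g_exc, g_inh) and the scoring function are hypothetical placeholders for whatever a given network simulation exposes.

```python
import itertools

def evaluate(params):
    """Placeholder score for one simulation run; a real sweep would launch the
    network model with these parameters and return a fitness or error measure."""
    return -((params["g_exc"] - 1.0) ** 2 + (params["g_inh"] - 2.0) ** 2)

# Hypothetical parameter grid (names and values are illustrative only).
grid = {
    "g_exc": [0.5, 1.0, 1.5],   # excitatory synaptic gain
    "g_inh": [1.0, 2.0, 4.0],   # inhibitory synaptic gain
}

# Evaluate every combination and keep the best-scoring parameter set.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=evaluate,
)
print("best parameters:", best)
```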