2019
DOI: 10.1101/766758
Preprint

Learning long temporal sequences in spiking networks by multiplexing neural oscillations

Abstract: Many cognitive and behavioral tasks, such as interval timing, spatial navigation, motor control and speech, require the execution of precisely timed sequences of neural activation that cannot be fully explained by a succession of external stimuli. We use a reservoir computing framework to explain how such neural sequences can be generated and employed in temporal tasks. We propose a general solution for recurrent neural networks to autonomously produce rich patterns of activity by providing a multi-periodic osc…
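
To make the mechanism described in the abstract concrete, here is a minimal rate-based sketch of the idea: a random recurrent reservoir is driven by two sinusoidal inputs, and a linear readout is fit to reproduce a long target signal. This is an illustrative reconstruction, not the authors' spiking-network implementation; the network size, frequencies, time constants, and the offline ridge fit (standing in for the recursive least-squares training typical of reservoir computing) are all assumptions.

```python
# Minimal rate-based sketch: a random reservoir driven by two sinusoidal
# inputs, with a linear readout fit to a long, precisely timed target.
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 300, 10.0, 0.001            # reservoir size, duration (s), time step (s)
steps = int(T / dt)
t = np.arange(steps) * dt

# Two oscillatory input units (assumed frequencies).
freqs = np.array([3.0, 5.1])           # Hz
u = np.sin(2 * np.pi * freqs[:, None] * t)     # shape (2, steps)

# Random recurrent and input weights with echo-state-style scaling.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) * 1.2
W_in = rng.normal(0.0, 1.0, (N, 2))

# Run the reservoir: leaky tanh units driven by the multiplexed oscillations.
tau = 0.05                              # unit time constant (s)
x = np.zeros(N)
states = np.empty((steps, N))
for k in range(steps):
    x += (dt / tau) * (-x + np.tanh(W @ x + W_in @ u[:, k]))
    states[k] = x

# Target: an arbitrary long, precisely timed output (here a slow chirp).
target = np.sin(2 * np.pi * (0.2 + 0.05 * t) * t)

# Fit the linear readout by ridge regression (offline stand-in for
# recursive-least-squares / FORCE-style readout learning).
lam = 1e-4
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
print("readout training error:", np.mean((states @ w_out - target) ** 2))
```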

Cited by 2 publications (2 citation statements)
References 61 publications (93 reference statements)
“…Another recent approach involves multiplexing oscillations in a spiking neural network (Vincent-Lamarre et al., 2020; Miall, 1989). Two input units inject sine-waves into a reservoir of neurons and the spiking dynamics in the reservoir follow a stable and unique pattern, which enables the learning of a long and stable output.…”
Section: Discussion (mentioning)
Confidence: 99%
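
The multiplexing argument in this quote can be illustrated with a small numerical check (assumed frequencies and window, not values from the paper): with an irrational frequency ratio, the joint phase of the two input sines never exactly revisits its starting point during a long trial, so the reservoir receives a unique drive at every moment, whereas a single sine repeats once per period.

```python
# Illustrative check (assumed frequencies): two sines with an irrational
# frequency ratio give a joint phase that never exactly returns to its
# starting point within a long window, so each time point carries a unique
# input "time stamp"; a single sine would repeat every period.
import numpy as np

f1, f2 = 1.0, np.sqrt(2.0)             # Hz; irrational ratio -> no exact repeat
dt, T = 0.001, 20.0                    # s
t = np.arange(0.0, T, dt)

# Joint phase of the two oscillators, each wrapped to [0, 1).
phase = np.stack([(f1 * t) % 1.0, (f2 * t) % 1.0], axis=1)

# Wrap-around (torus) distance from the initial joint phase at every later
# time, ignoring the first half second, which is trivially close to the start.
diff = np.abs(phase - phase[0])
diff = np.minimum(diff, 1.0 - diff)
d = np.linalg.norm(diff, axis=1)
later = t > 0.5
print("closest return of the joint phase within 20 s:", d[later].min())
print("a single 1 Hz sine would have returned exactly", int(T * f1), "times")
```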
“…A number of approaches have been developed in recent years to stabilize the spiking dynamics of SNNs while retaining sufficient variability for output learning (Laje and Buonomano, 2013; Hennequin et al., 2014; Pehlevan et al., 2018; Vincent-Lamarre et al., 2020). To improve stability, recent approaches used feed-forward structures (Pehlevan et al., 2018) or employed supervised learning rules (Laje and Buonomano, 2013).…”
Section: Introduction (mentioning)
Confidence: 99%