The cerebellum as a liquid state machine
Yamazaki and Tanaka (2007)
DOI: 10.1016/j.neunet.2007.04.004

Cited by 167 publications (146 citation statements: 4 supporting, 142 mentioning, 0 contrasting) · References 15 publications
“…The GR activity was a sparse representation of the input signal, so each simulation time sample corresponded to a different state of the granular layer [31]. The IOs received the US as a low-frequency random spike pattern [32], not depending on the dynamics of the network but associated with the US event. The IOs were connected one to one to PCs through the Climbing Fibers (CFs).…”
Section: Computational Cerebellar Model and Optimization (mentioning)
confidence: 99%
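This excerpt casts the granular layer as the liquid: the input is recoded into a sparse, time-varying granule-cell (GR) population state, so every time sample corresponds to a distinct network state. As a rough illustration only (not the cited model's actual equations; the population sizes, leaky integration, and k-winners-take-all rule below are all assumptions), such a sparse state encoding might look like this:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes -- illustrative, not taken from the cited model.
    n_inputs, n_granule, k_active = 4, 500, 25
    W_mf = rng.normal(size=(n_granule, n_inputs))   # mossy fiber -> GR weights

    def granule_state(u, prev_drive):
        """Sparse GR state: leaky integration plus k-winners-take-all."""
        drive = 0.9 * prev_drive + W_mf @ u          # leaky temporal integration
        state = np.zeros(n_granule)
        state[np.argsort(drive)[-k_active:]] = 1.0   # only k cells fire (sparse code)
        return state, drive

    # Each input time sample yields a different sparse GR state vector.
    T = 200
    u_t = np.stack([np.sin(0.05 * (i + 1) * np.arange(T))
                    for i in range(n_inputs)], axis=1)
    states, drive = [], np.zeros(n_granule)
    for t in range(T):
        s, drive = granule_state(u_t[t], drive)
        states.append(s)
    states = np.array(states)   # shape (T, n_granule), ~5% of cells active per step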
“…Functionally, a classification machine able to categorize any spatial pattern would show great divergence (represented in the recent literature as sparsity and decorrelation (Franzius et al. 2007; Wyss et al. 2003)), while a device transforming spatial into temporal patterns would show great convergence [a connectivity pattern that is at the core of echo-state networks and liquid-state machines (Yamazaki and Tanaka 2007)]. Should we be amazed that mechanistic hypotheses can be deduced from macroscopic features of the brain?…”
Section: Connection Schemes and Myelo-architectonics (mentioning)
confidence: 99%
“…Therefore, the states of this dynamic reservoir are linearly combined in an output layer, which is the sole trained part of the architecture. This type of state-dependent computation has been proposed as a biologically plausible model for cortical processing (Buonomano and Maass, 2009; Maass et al., 2002; Yamazaki and Tanaka, 2007). Such theoretical models include Echo State Networks (Jaeger and Haas, 2004) for analog neurons and Liquid State Machines (Maass et al., 2002) for spiking neurons.…”
Section: Introduction (mentioning)
confidence: 99%
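The scheme this excerpt summarizes — a fixed random recurrent reservoir whose states are read out by the one trained component, a linear output layer — can be illustrated with a minimal Echo State Network. This is a generic sketch, not code from any of the cited papers; the reservoir size, spectral radius, toy task, and ridge penalty are all illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    n_res, T = 300, 1000   # illustrative reservoir size and sequence length

    # Fixed random reservoir, rescaled to spectral radius < 1 (echo-state property).
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-1.0, 1.0, size=(n_res, 1))

    u = np.sin(0.1 * np.arange(T))[:, None]       # toy input signal
    y_target = np.sin(0.1 * np.arange(T) + 1.0)   # toy target: phase-shifted copy

    # Drive the reservoir and collect its states; nothing here is trained.
    x = np.zeros(n_res)
    X = np.empty((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in @ u[t])
        X[t] = x

    # The sole trained part: a linear readout, fit by ridge regression.
    lam = 1e-6
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y_target)
    print("train MSE:", float(np.mean((X @ W_out - y_target) ** 2)))

Swapping the tanh units for spiking neurons turns this Echo State Network into a Liquid State Machine; the target paper's thesis is that the cerebellar granular layer plays the role of the fixed reservoir, with the Purkinje-cell readout as the trained output layer.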