2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852038

Predicting Performance using Approximate State Space Model for Liquid State Machines

Abstract: The Liquid State Machine (LSM) is a brain-inspired architecture used for solving problems such as speech recognition and time-series prediction. An LSM comprises a randomly connected recurrent network of spiking neurons, which propagates non-linear neuronal and synaptic dynamics. Maass et al. have argued that the non-linear dynamics of LSMs are essential to their performance as a universal computer. The Lyapunov exponent (µ), used to characterize the non-linearity of the network, correlates well with LSM perform…
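The Lyapunov exponent µ mentioned in the abstract can be illustrated with a small numerical sketch. The code below estimates the largest Lyapunov exponent of a random recurrent network by tracking the divergence of two nearby state trajectories; it uses a simplified rate-based (tanh) surrogate rather than the paper's spiking LSM, and all sizes and parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: estimate the largest Lyapunov exponent of a random
# recurrent network via trajectory divergence. Rate-based (tanh)
# surrogate, NOT the paper's spiking LSM; N and the weight scale
# are illustrative assumptions.
rng = np.random.default_rng(0)
N = 100                                       # reservoir size (assumed)
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))   # random recurrent weights

def step(x):
    return np.tanh(W @ x)

def lyapunov_estimate(steps=200, eps=1e-8):
    x = rng.normal(size=N)
    delta = rng.normal(size=N)
    delta *= eps / np.linalg.norm(delta)      # perturbation of norm eps
    x_pert = x + delta
    total = 0.0
    for _ in range(steps):
        x, x_pert = step(x), step(x_pert)
        d = np.linalg.norm(x_pert - x)
        total += np.log(d / eps)
        # renormalize so the perturbation stays infinitesimal
        x_pert = x + (eps / d) * (x_pert - x)
    return total / steps

mu = lyapunov_estimate()
print(f"estimated Lyapunov exponent: {mu:.3f}")
```

A positive estimate indicates locally expanding (chaotic) dynamics, a negative one contracting dynamics; the edge between the two regimes is where such networks are often reported to compute best.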

Cited by 8 publications (10 citation statements) | References 22 publications
“…Recently, a digital implementation of an LSM using these models was proposed for the TI-46 spoken digits recognition task with a spike-based local learning rule for the linear classifier [17]. Further, the LSM network was represented using a state-space model and a performance-predicting memory metric was extracted [11]. It has been argued that the post-synaptic current waveform plays a crucial role in classification accuracy [18]–[20], specifically for the speech digit recognition task [17] (Table I).…”
Section: Introduction
confidence: 99%
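The statement above refers to representing the LSM as a state-space model and extracting a memory metric [11]. As a hedged illustration only, the sketch below uses a linear state-space abstraction, x[t+1] = A @ x[t] + b * u[t], and scores memory as how well delayed inputs can be linearly reconstructed from the state (a delay-recall capacity in the spirit of such metrics). The exact metric of [11] may differ; `A`, `b`, and `memory_capacity` are assumed names.

```python
import numpy as np

# Illustrative linear state-space abstraction of a reservoir with a
# delay-recall memory score. NOT the exact metric of [11]; sizes and
# the dynamics matrix scale are assumptions.
rng = np.random.default_rng(1)
N = 50
A = rng.normal(0, 0.9 / np.sqrt(N), (N, N))   # contractive-ish dynamics
b = rng.normal(size=N)

def run(u):
    x, states = np.zeros(N), []
    for ut in u:
        x = A @ x + b * ut
        states.append(x.copy())
    return np.array(states)

def memory_capacity(T=2000, max_delay=30, washout=100):
    u = rng.uniform(-1, 1, T)
    X = run(u)[washout:]                      # states for t = washout..T-1
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:T - k]         # input delayed by k steps
        w, *_ = np.linalg.lstsq(X, target, rcond=None)
        mc += np.corrcoef(X @ w, target)[0, 1] ** 2
    return mc

mc = memory_capacity()
print(f"memory capacity estimate: {mc:.2f}")
```

The score sums squared correlations over delays, so it lies between 0 and `max_delay`; larger values indicate the state retains more of the input history.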
“…channels for each input digit sample. The details of the preprocessing stage have been discussed elsewhere [11], [24]. These 500 samples are used for training and testing the network in a 5-fold manner.…”
Section: Introduction
confidence: 99%
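The 5-fold train/test protocol mentioned above is standard and can be sketched directly: 500 samples are split into 5 folds, each fold serving once as the held-out test set. The preprocessing and readout training themselves are out of scope here; the fold logic below is a minimal, self-contained assumption of that protocol.

```python
import numpy as np

# Minimal sketch of the 5-fold train/test split over 500 samples.
# Preprocessing and classifier training are omitted (illustrative only).
rng = np.random.default_rng(2)
n_samples, n_folds = 500, 5
indices = rng.permutation(n_samples)
folds = np.array_split(indices, n_folds)

for i in range(n_folds):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
    # train the readout on train_idx, evaluate on test_idx (omitted)
    assert len(test_idx) == 100 and len(train_idx) == 400
print("5-fold split verified: 400 train / 100 test per fold")
```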
“…The first and most straightforward category is simply post-training accuracy. The second type of heuristic consists of a posteriori methods [4]–[7] that require some simulation time without the need for training the network on a dataset, e.g., computing the separation distance of close inputs using the Lyapunov exponent [4]. Lastly, the a priori category encompasses methods that can create reservoirs without the need for simulation [8,9], algorithmically constructing the reservoir from a mathematical definition.…”
Section: Software-based Solutions
confidence: 99%
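The a posteriori separation measure mentioned above can be illustrated without any training: drive one reservoir with two nearby input streams and track the distance between the resulting state trajectories over time. The sketch below uses a rate-based (tanh) surrogate reservoir under assumed parameters, not the cited spiking implementation.

```python
import numpy as np

# Hedged illustration of an a posteriori separation measure: distance
# between state trajectories driven by two close input streams.
# Rate-based surrogate; all parameters are illustrative assumptions.
rng = np.random.default_rng(3)
N, T, eps = 100, 100, 1e-3
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.normal(size=N)

def trajectory(u):
    x, out = np.zeros(N), []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        out.append(x.copy())
    return np.array(out)

u = rng.uniform(-1, 1, T)
d = np.linalg.norm(trajectory(u) - trajectory(u + eps), axis=1)
print(f"final separation distance: {d[-1]:.4f}")
```

Whether `d` grows or shrinks over time reflects the same expansion/contraction behavior that the Lyapunov exponent summarizes, but measured directly from simulated trajectories.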
“…Other metrics have been explored to quantify performance based on a posteriori dynamic analysis, such as the Lyapunov exponent [4], the average state entropy [5], the dynamic profile of the Jacobian of W_R [6], or the approximate state-space model [7]. These methods allow for a more guided search over a reservoir's parameters.…”
Section: Dynamical Optimization
confidence: 99%
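Among the metrics listed above, the Jacobian-based dynamic profile also admits a compact sketch. For a rate-based reservoir x[t+1] = tanh(W_R @ x[t] + w_in * u[t]), the local Jacobian is diag(1 - x**2) @ W_R, and its largest singular value along a trajectory indicates local expansion or contraction. This is a generic illustration under assumed parameters; the exact profile used in the cited work may differ.

```python
import numpy as np

# Sketch of a Jacobian-based dynamic profile for a rate-based reservoir.
# Local Jacobian of x -> tanh(W_R @ x + w_in * u) is diag(1 - x^2) @ W_R
# (with x the post-tanh state). Illustrative assumptions throughout.
rng = np.random.default_rng(4)
N = 80
W_R = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.normal(size=N)

x = np.zeros(N)
sigmas = []
for ut in rng.uniform(-1, 1, 50):
    x = np.tanh(W_R @ x + w_in * ut)
    J = (1.0 - x ** 2)[:, None] * W_R    # local Jacobian at this state
    sigmas.append(np.linalg.norm(J, 2))  # largest singular value
print(f"mean local expansion rate: {np.mean(sigmas):.3f}")
```

Values consistently above 1 suggest locally expanding dynamics, values below 1 contracting dynamics; profiling this quantity along trajectories gives a training-free signal for tuning W_R.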