2018
DOI: 10.3389/fnins.2018.00524

SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition

Abstract: In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it po…
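The abstract describes input-to-liquid synapses adapted with STDP. As a rough illustration of how a pair-based STDP rule adjusts such weights, here is a minimal sketch; the exponential-trace formulation, constants (A_PLUS, A_MINUS, TAU_TRACE), and layer sizes are assumptions for illustration, not the paper's actual rule or parameters.

```python
import numpy as np

# Minimal pair-based STDP sketch for plastic input-to-liquid synapses.
# All constants are hypothetical; the paper's rule and values may differ.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU_TRACE = 20.0                # spike-trace decay time constant (ms)
DT = 1.0                        # simulation time step (ms)
W_MIN, W_MAX = 0.0, 1.0         # synaptic weight bounds

rng = np.random.default_rng(0)
n_in, n_liquid = 784, 400                    # e.g., MNIST pixels -> liquid neurons
w = rng.uniform(0.0, 0.3, (n_in, n_liquid))  # plastic input weights
pre_trace = np.zeros(n_in)                   # presynaptic spike traces
post_trace = np.zeros(n_liquid)              # postsynaptic spike traces

def stdp_step(pre_spikes, post_spikes):
    """One STDP update; pre_spikes/post_spikes are boolean spike vectors."""
    global w
    # Decay the exponential traces, then add the current spikes.
    pre_trace[:] = pre_trace * np.exp(-DT / TAU_TRACE) + pre_spikes
    post_trace[:] = post_trace * np.exp(-DT / TAU_TRACE) + post_spikes
    # Potentiate where the presynaptic trace is high when the postsynaptic
    # neuron fires (pre-before-post ordering).
    w += A_PLUS * np.outer(pre_trace, post_spikes)
    # Depress where the postsynaptic trace is high when the presynaptic
    # neuron fires (post-before-pre ordering).
    w -= A_MINUS * np.outer(pre_spikes, post_trace)
    np.clip(w, W_MIN, W_MAX, out=w)
```

Calling stdp_step once per simulation step with the current spike vectors drives each liquid neuron's afferent weights toward the input patterns that precede its firing, which is the self-learning behavior the abstract refers to.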

Cited by 33 publications (37 citation statements) | References 40 publications
“…In contrast, we analyze the effects of dividing a large liquid into multiple smaller units, while leaving the total number of neurons the same. Research (Srinivasan et al, 2018) also shows that multiple liquids perform better than a single liquid, at higher number of neurons. The input to liquid connections in Srinivasan et al (2018) were trained in an unsupervised manner.…”
Section: Introduction
confidence: 94%
“…Research (Srinivasan et al, 2018) also shows that multiple liquids perform better than a single liquid, at higher number of neurons. The input to liquid connections in Srinivasan et al (2018) were trained in an unsupervised manner. Also note that each liquid was fed with distinct parts of an input, and hence is different from this work.…”
Section: Introduction
confidence: 94%
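The statement above notes that in Srinivasan et al. (2018) each liquid in the ensemble is fed a distinct part of the input. A toy sketch of that partitioning follows, using a rate-based stand-in for spiking dynamics; split_input, LiquidUnit, and all sizes and constants are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_input(x, n_liquids):
    """Split a flattened input into contiguous, roughly equal segments,
    one per liquid (illustrative; the actual partitioning may differ)."""
    return np.array_split(x, n_liquids)

class LiquidUnit:
    """Toy 'liquid': fixed random recurrent weights, plastic input weights."""
    def __init__(self, n_in, n_rec=100):
        self.w_in = rng.uniform(0.0, 0.3, (n_in, n_rec))   # plastic (STDP-trained)
        self.w_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))  # fixed random recurrence
        self.state = np.zeros(n_rec)

    def step(self, x_seg):
        # Leaky rate-based surrogate for spiking dynamics, for illustration only.
        self.state = np.tanh(x_seg @ self.w_in + self.state @ self.w_rec)
        return self.state

# Ensemble of 4 liquids, each seeing one quarter of a 784-pixel input.
x = rng.random(784)
segments = split_input(x, 4)
liquids = [LiquidUnit(len(seg)) for seg in segments]
readout = np.concatenate([lq.step(seg) for lq, seg in zip(liquids, segments)])
print(readout.shape)  # (400,) -- concatenated liquid states form the ensemble output
```

The concatenated states illustrate why an ensemble of small liquids can match a single large one at the same total neuron count: each unit only has to represent its own input segment.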
“…Most of the classification performances available in literature for SNNs are for MNIST and CIFAR-10 datasets. The popular methods for SNN training are 'Spike Time Dependent Plasticity (STDP)' based unsupervised learning [7,49,3,42,43] and 'spike-based backpropagation' based supervised learning [24,16,48,30,29]. There are a few works [45,17,46,22] which tried to combine the two approaches to get the best of both worlds.…”
Section: The Classification Performance
confidence: 99%
“…STDP-trained two-layer network (consisting of 6400 output neurons) has been shown to achieve 95% classification accuracy on MNIST dataset. However, shallow network structure limits the expressive power of neural network [7,49,3,42,43] and suffers from scalability issues as the classification performance easily saturates. Layer-wise STDP learning [17,23] has shown the capabilities of efficient feature extraction on multi-layer convolutional SNNs.…”
confidence: 99%