2022
DOI: 10.3389/fnins.2022.865897
A surrogate gradient spiking baseline for speech command recognition

Abstract: Artificial neural networks (ANNs) are the basis of recent advances in artificial intelligence (AI); they typically use real-valued neuron responses. By contrast, biological neurons are known to operate using spike trains. In principle, spiking neural networks (SNNs) may have a greater representational capability than ANNs, especially for time series such as speech; however, their adoption has been held back by both a lack of stable training algorithms and a lack of compatible baselines. We begin with a fairly t…
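The surrogate gradient idea in the title can be sketched concretely: in the forward pass the spike nonlinearity is a Heaviside step on the membrane potential, and during backpropagation its undefined derivative is replaced by a smooth surrogate. A minimal sketch (the fast-sigmoid surrogate and the `beta` steepness parameter are common choices in the SNN literature, not necessarily this paper's exact ones):

```python
import math

def spike_forward(v, theta=1.0):
    # Heaviside step: emit a spike when the membrane potential
    # v crosses the firing threshold theta
    return 1.0 if v >= theta else 0.0

def spike_surrogate_grad(v, theta=1.0, beta=10.0):
    # Fast-sigmoid surrogate derivative, used in place of the
    # (zero-almost-everywhere) Heaviside derivative during
    # backpropagation; beta controls its sharpness around theta
    return 1.0 / (beta * abs(v - theta) + 1.0) ** 2
```

The surrogate peaks at the threshold and decays on either side, so gradient flows mostly through neurons whose potential is near firing.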


Cited by 21 publications (18 citation statements) | References 83 publications
“…In this study, we use the adaptive leaky integrate-and-fire (AdLIF) neuron model (Bittar and Garner, 2022) with updated parameter boundaries. To highlight this modification, our neuron model is called the constrained AdLIF (cAdLIF).…”
Section: Spiking Neurons
confidence: 99%
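The "updated parameter boundaries" of the cAdLIF are not reproduced in this snippet. As a generic illustration only, constraining trainable neuron parameters is often implemented by projecting each parameter back into a fixed interval after every optimizer step; the interval values below are placeholders, not the cited paper's bounds:

```python
def clamp(value, lo, hi):
    # Project a trainable neuron parameter back into its
    # allowed interval [lo, hi] after an optimizer update
    return max(lo, min(hi, value))

# Placeholder bounds, e.g. keeping a decay coefficient in [0, 1)
alpha = clamp(1.37, 0.0, 0.999)
```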
“…This contrasts with adaptive neuronal threshold models, in which only the spiking activity is taken into account (Yin et al, 2021). This model was chosen for its demonstrated superior performance over a leaky integrate-and-fire (LIF) model with an adaptive neuronal threshold (Bittar and Garner, 2022). The computational graph of the neuron model, rolled out over time, is shown in Figure 2.…”
Section: Spiking Neurons
confidence: 99%
“…Namely, neuromorphic computing has been advancing machine intelligence in energy efficiency, and recent evidence shows that it also improves conventional metrics of SOTA performance such as accuracy, reward, or speed. For example, spike-based models achieve processing speed and energy efficiency by imitating biological neuronal activations, without trading off performance (Jeffares et al 2022), or with minimal trade-offs (Bittar and Garner 2022); short-term plasticity (STP) improves the performance of neural networks in dynamic tasks such as video processing, navigation, robotics, and video games (Moraitis et al 2020, Garcia Rodriguez et al c) Classification: SoftHebb's unsupervised algorithm can be used to cluster an input dataset into classes, either on its own through its Bayesian inference of the classes as hidden causes of the input, or with an added supervised linear classifier. E.g.…”
Section: Introduction
confidence: 99%