2023
DOI: 10.1109/jproc.2023.3308088
Training Spiking Neural Networks Using Lessons From Deep Learning

Jason K. Eshraghian, Max Ward, Emre O. Neftci, et al.

Abstract: The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This article serves as a tutorial and perspective showing how to apply the lessons learned from several decades of research in deep learning, gradient descent, backpropagation, and …

Cited by 215 publications (47 citation statements) | References 210 publications
“…Specifically, a fully‐connected three‐layer SNN has been simulated using the snnTorch platform, as shown in Figure 5a. [30,31] The spiking neuron can be described by a LIF neuronal model based on the experimental data in Figure 3c,d, where the firing rate has been fitted for training the SNN, as exhibited in Figure 5b,c, respectively. The rate of spikes sent to the input layer is proportional to the grayscale value of each pixel in the identified image; see Figure S10 (Supporting Information) for details.…”
Section: Results (mentioning, confidence: 99%)
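The setup described in this statement follows the standard snnTorch workflow: a fully-connected three-layer network of leaky integrate-and-fire neurons driven by rate-coded pixel intensities. The sketch below is a minimal, hedged reconstruction of such a pipeline; the layer sizes, decay rate `beta`, and number of time steps are illustrative assumptions, not values from the cited work, and the random tensor stands in for a normalized grayscale image.

```python
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import spikegen

# Illustrative sizes: 28x28 grayscale input, 128 hidden units, 10 classes
num_inputs, num_hidden, num_outputs = 28 * 28, 128, 10
num_steps = 25   # number of simulation time steps (assumption)
beta = 0.9       # LIF membrane decay rate (assumption)

# Fully-connected three-layer SNN with leaky integrate-and-fire neurons
fc1, lif1 = nn.Linear(num_inputs, num_hidden), snn.Leaky(beta=beta)
fc2, lif2 = nn.Linear(num_hidden, num_outputs), snn.Leaky(beta=beta)

# Rate coding: each pixel's grayscale value becomes a per-step spike probability
img = torch.rand(1, num_inputs)                    # stand-in for a normalized image
spk_in = spikegen.rate(img, num_steps=num_steps)   # shape: [num_steps, 1, num_inputs]

mem1, mem2 = lif1.init_leaky(), lif2.init_leaky()
spk_out = []
for step in range(num_steps):
    spk1, mem1 = lif1(fc1(spk_in[step]), mem1)     # hidden layer spikes and membrane
    spk2, mem2 = lif2(fc2(spk1), mem2)             # output layer spikes and membrane
    spk_out.append(spk2)

# Spike counts at the output layer act as class scores
print(torch.stack(spk_out).sum(dim=0))
```

The `spikegen.rate` step turns each pixel's intensity into a Bernoulli spike probability per time step, matching the proportionality between grayscale values and input spike rates described in the quote.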
“…Another common training method uses the surrogate gradient learning algorithm, as described in the study of Eshraghian et al. [32] This approach leverages the sigmoid function gradient in place of the actual neuron gradient. The weight update rule for this algorithm can be expressed as

$$\omega_{ij}^{l} = \omega_{ij}^{l} - \alpha \frac{\partial E}{\partial \omega_{ij}^{l}}, \quad \text{where} \quad \frac{\partial E}{\partial \omega_{ij}^{l}} = \frac{\partial E}{\partial S} \frac{\partial S}{\partial V} \frac{\partial V}{\partial I} \frac{\partial I}{\partial \omega_{ij}^{l}},$$

where $S$ is the spike operator function, which is nondifferentiable and cannot be used for gradient descent optimization.…”
Section: Methods (mentioning, confidence: 99%)
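The quoted passage describes replacing the nondifferentiable spike operator $S$ with a sigmoid gradient on the backward pass, so that the chain $\frac{\partial E}{\partial S}\frac{\partial S}{\partial V}\frac{\partial V}{\partial I}\frac{\partial I}{\partial \omega_{ij}^{l}}$ becomes computable. Below is a minimal PyTorch sketch of that idea; the class name and slope value are illustrative assumptions, and snnTorch ships ready-made versions of this in `snntorch.surrogate` (e.g. `surrogate.fast_sigmoid()`).

```python
import torch

class SigmoidSurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; sigmoid derivative as a
    surrogate for dS/dV in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        # v is the membrane potential already shifted by the threshold,
        # so the spike operator is S = Theta(v)
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        slope = 25.0  # illustrative steepness of the surrogate (assumption)
        sig = torch.sigmoid(slope * v)
        surrogate_grad = slope * sig * (1.0 - sig)  # derivative of sigmoid(slope * v)
        return grad_output * surrogate_grad

# Usage: spikes flow forward as hard 0/1 events, but gradients flow back
# through the smooth sigmoid, enabling the weight update rule above.
mem = torch.randn(4, requires_grad=True)       # stand-in membrane potentials
spk = SigmoidSurrogateSpike.apply(mem - 1.0)   # firing threshold assumed to be 1.0
spk.sum().backward()
print(mem.grad)
```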
“…Neuromorphic Hardware Simulation: The simulation was carried out in Python 3 and based on the snnTorch framework. [37] The autoencoder was based on the encoder-code-decoder structure. The public dataset for anomaly detection was from the National Aeronautics and Space Administration (NASA) Mars Science Laboratory (MSL) (available at https://github.com/NetManAIOps/OmniAnomaly).…”
Section: Methods (mentioning, confidence: 99%)
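For context, an encoder-code-decoder autoencoder scores anomalies by reconstruction error: inputs the model reconstructs poorly are flagged as anomalous. The sketch below is a plain (non-spiking) PyTorch illustration under assumed dimensions; the 55-feature input is only a placeholder for MSL telemetry windows, and the cited work trains a spiking variant with snnTorch.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Encoder-code-decoder structure: compress the input to a small
    code, then reconstruct it."""
    def __init__(self, num_features=55, code_dim=8):  # dimensions are assumptions
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_features, 32), nn.ReLU(),
            nn.Linear(32, code_dim),          # the "code" bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, num_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(16, 55)                 # stand-in for telemetry feature windows
recon = model(x)
score = ((recon - x) ** 2).mean(dim=1)  # high reconstruction error => anomaly
print(score)
```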