2019
DOI: 10.48550/arxiv.1903.12272
Preprint
Deep Convolutional Spiking Neural Networks for Image Classification

Abstract: Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent, whereas spiking neural networks are trained with spike-timing-dependent plasticity. Training deep convolutional neural networks is a memory- and power-intensive job. Spiking networks could potentially help in reducing power usage. In this work we focus on implementing a spiking CNN using TensorFlow to examine behaviour of the network a…
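The contrast the abstract draws between SGD-trained ANNs and STDP-trained SNNs can be sketched with a minimal pair-based STDP weight update. This is a generic illustration, not the paper's implementation; the learning rates and time constant are assumed values:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike (dt > 0), depress otherwise.
    dt = t_post - t_pre, in milliseconds."""
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau)    # causal pair -> potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)   # anti-causal pair -> depression
    return float(np.clip(w + dw, 0.0, 1.0))  # keep the weight bounded

w = 0.5
w = stdp_update(w, dt=5.0)    # pre fires 5 ms before post: weight grows
w = stdp_update(w, dt=-5.0)   # pre fires 5 ms after post: weight shrinks
```

Unlike SGD, this rule is local: each synapse updates from the relative timing of its own pre- and postsynaptic spikes, with no global loss gradient.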


Cited by 10 publications (12 citation statements)
References 47 publications
“…To monitor the weight updates (synapse changes) in the spiking network, the software provides the capability to monitor spike activity, weight evolution (updates), feature extraction (spikes per map per label), synapse convergence, and more. This software tool was used here and in [34], [35]. Similar to our work, Mozafari et al. released the software tool SpykeTorch in [43], which is based on the PyTorch [44] deep learning tool.…”
Section: Software Tool
confidence: 98%
“…It should be noted, however, that the average number of spikes is greatly increased because the adaptive sampling increases the resolution of the signal. It is worth noting that the fitness function can be adjusted, using the m and n parameters in (6), based on the specific application requirements to emphasize either precision or computational efficiency:
Section: B. Threshold Optimization
confidence: 99%
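The exact form of equation (6) is not reproduced in this excerpt, but a fitness function of this kind can be sketched as a weighted trade-off between signal precision and spike count. The functional form and the roles of m and n below are assumptions for illustration only:

```python
def fitness(precision, spike_count, m=1.0, n=0.001):
    """Hypothetical trade-off (not the paper's equation (6)):
    a larger m rewards reconstruction precision, a larger n
    penalizes spike count (a proxy for computation cost)."""
    return m * precision - n * spike_count

# Raising n favors sparser (cheaper) encodings at some cost in precision:
precision_leaning = fitness(0.95, 100, m=1.0, n=0.001)
efficiency_leaning = fitness(0.90, 40, m=1.0, n=0.005)
```

Tuning m and n then steers the threshold optimization toward whichever objective the application requires.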
“…In this paper, we focus on encoding static images into spike trains for image classification with SNNs. Two of the well-known methods for encoding static images are firing-rate-based encoding [2]–[4] and population rank order encoding [5], [6]. In rate-based encoding, each input is a Poisson spike train with a firing rate proportional to the intensity of the corresponding pixel in the image [4].…”
Section: Introduction
confidence: 99%
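The rate-based encoding described in this statement can be sketched in a few lines: each pixel drives an independent Bernoulli approximation of a Poisson spike train whose rate scales with intensity. The maximum rate and number of time steps are assumed values, not taken from [4]:

```python
import numpy as np

def rate_encode(image, t_steps=100, max_rate=0.2, rng=None):
    """Poisson-style rate coding: at each time step a pixel emits a
    spike with probability proportional to its normalized intensity.
    image: 2-D array of intensities in [0, 255].
    Returns a (t_steps, H, W) boolean spike train."""
    rng = np.random.default_rng(rng)
    p = (image / 255.0) * max_rate  # per-step spike probability
    return rng.random((t_steps,) + image.shape) < p

img = np.array([[0, 128], [255, 64]], dtype=float)
spikes = rate_encode(img, t_steps=200, rng=0)
rates = spikes.mean(axis=0)  # brighter pixels fire more often
```

A pixel with intensity 0 never fires, while a pixel at 255 fires at the full `max_rate`; averaging the spike train over time approximately recovers the normalized image.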
“…We achieved 98.40% accuracy without more expensive training techniques such as the error normalization in [7]. We outperformed convolutional SNNs such as [11], [15] and DNNs such as [1], [12].…”
Section: A. Spiking Dataset Classification
confidence: 99%
“…The dataset has 20 classes and splits into 8156 and 2264 samples for training and testing respectively. A sample of the SHD dataset is shown. Table II reports the accuracy comparison:

Table II.
Method             Accuracy (%)   Method            Accuracy (%)
[7]                98.66          Spiking MLP [3]   47.5
Phased LSTM [12]   97.28          R-SNN [3]         83.2
Spiking CNN [11]   95.72          LSTM [3]          89.0
Graph CNN [1]      98.5           R-SNN [20]        82.0
Spiking CNN [15]   98.32          SRNN [18]         84.4

We achieved 85.69% accuracy, which is the best in the SNN domain.…”
Section: A. Spiking Dataset Classification
confidence: 99%