2018
DOI: 10.3389/fnins.2018.00435
Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning

Abstract: Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. The recent efforts in SNNs have been focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more difficult functional representations. In this paper, we propose a pre-training scheme using biological…
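The pre-training the abstract refers to is based on spike-timing-dependent plasticity (STDP). As a rough illustration only (not the paper's exact learning rule; all parameter values here are illustrative assumptions), a pair-based STDP weight update can be sketched as:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP sketch: potentiate the synapse when the
    presynaptic spike precedes the postsynaptic spike (LTP), and
    depress it otherwise (LTD). Spike times are in ms."""
    dt = t_post - t_pre
    if dt >= 0:
        # Pre before post: long-term potentiation, decaying with |dt|.
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        # Post before pre: long-term depression, decaying with |dt|.
        dw = -a_minus * np.exp(dt / tau_minus)
    # Keep the weight inside its allowed range.
    return float(np.clip(w + dw, w_min, w_max))
```

The exponential decay means tightly correlated spike pairs change the weight most, which is what lets STDP extract input statistics without labels.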

Cited by 165 publications (114 citation statements)
References 33 publications
“…Current SNN models for pattern recognition can be generally categorized into three classes: that is, indirect training [12,13,14,15,16,17,18,19], direct SL training with BP [11,26,20,21,22,23,53], and plasticity-based unsupervised training with supervised modules [54,24,25]. for optimal initial weights and then used current-based BP to re-train all-layer weights in a supervised way [53]; however, this also resulted in the model being bio-implausible due to the use of the BP algorithm.…”
Section: Comparison With Other SNN Models
confidence: 99%
“…However, due to the discontinuous nature of communication in SNNs, directly implementing the algorithm in DSNNs is still a challenge, but some works try to overcome it by finding learning rules that can achieve high performance as observed in the brain [37][38][39] or by treating the discontinuities that spike times introduce into neurons' membrane potentials as noise [40].…”
Section: Discussion
confidence: 99%
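The idea mentioned in the excerpt above, of smoothing over the spike threshold's discontinuity during training, is commonly realized as a surrogate gradient: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth derivative. A minimal sketch, assuming a sigmoid-derivative surrogate (the steepness `beta` and threshold `v_th` are illustrative, not from the source):

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: hard threshold (Heaviside step), which is
    non-differentiable at v == v_th."""
    return (np.asarray(v) >= v_th).astype(float)

def spike_surrogate_grad(v, v_th=1.0, beta=10.0):
    """Backward pass: derivative of a sigmoid centered at the
    threshold, used in place of the Heaviside's ill-defined
    derivative so gradients can flow through spiking neurons."""
    s = 1.0 / (1.0 + np.exp(-beta * np.asarray(v) - (-beta * v_th)))
    s = 1.0 / (1.0 + np.exp(-beta * (np.asarray(v) - v_th)))
    return beta * s * (1.0 - s)
```

The surrogate is largest near the threshold, so weight updates concentrate on neurons whose membrane potential was close to firing.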
“…Note that N_miss + N_hit = N if the decision of the network is based on the maximum potential; if the decision is based on the earliest spike, N_miss + N_hit ≤ N because there might be no spikes for some inputs. Others have proposed error back propagation through all layers in spiking networks [18] [26], but this work focuses on using classifiers like a simple two-layer back propagation, SVM, or R-STDP.…”
Section: Reward Modulated STDP
confidence: 99%
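The maximum-potential decision rule and reward-modulated STDP (R-STDP) discussed in the excerpt above can be sketched as follows. This is an illustrative simplification under assumed conventions (reward +1 for a hit, -1 for a miss; `eligibility` is a precomputed STDP eligibility trace), not the cited work's implementation:

```python
import numpy as np

def classify_and_reward(potentials, label):
    """Decide by maximum membrane potential; reward is +1 for a hit
    (prediction matches the label) and -1 for a miss."""
    pred = int(np.argmax(potentials))
    return pred, 1.0 if pred == label else -1.0

def r_stdp_step(w, eligibility, reward, lr=0.05, w_min=0.0, w_max=1.0):
    """R-STDP sketch: the ordinary STDP update is accumulated into an
    eligibility trace and only committed to the weight when a reward
    (or punishment, reward < 0) signal arrives."""
    return float(np.clip(w + lr * reward * eligibility, w_min, w_max))
```

Gating the plasticity with a reward signal is what turns unsupervised STDP into a (weakly) supervised classifier layer.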