2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00266

Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks

Cited by 331 publications (289 citation statements) | References 48 publications
“…The classification accuracy of the proposed network, along with those of other CSNNs trained with different learning strategies including surrogate gradient learning, ANN-to-SNN conversion, and tandem learning, is presented in Table 3. Our proposed network reached 93.11% classification accuracy on CIFAR-10 with T = 60 and outperformed every other CSNN listed in Table 3 except that of Fang et al. (2021) [31], who use surrogate gradients in CSNNs with Leaky-IF neurons having trainable membrane time constants (i.e., each spiking neuron layer has an independent, trainable membrane time constant). Although they reached 0.04% higher accuracy than ours, implementing large CSNNs with Leaky-IF neurons having different time constants is, independent of the implementation platform, highly expensive in terms of memory and computation.…”
Section: CIFAR-10
confidence: 84%
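To make the trainable-time-constant mechanism concrete, below is a minimal PyTorch sketch of a leaky integrate-and-fire layer whose membrane time constant is a learned, layer-shared parameter, in the spirit of Fang et al. (2021) [31]. The class name, initialization, and hard-reset rule are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class PLIFNeuron(nn.Module):
    """Sketch of a LIF layer with a learnable membrane time constant,
    shared by all neurons in the layer (after Fang et al., 2021 [31]).
    Names and defaults here are illustrative assumptions."""

    def __init__(self, init_tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        # Learn w such that 1/tau = sigmoid(w); this keeps tau > 1 during training.
        init_w = -torch.log(torch.tensor(init_tau - 1.0))
        self.w = nn.Parameter(init_w)
        self.v_threshold = v_threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, batch, ...] input currents over T time steps.
        v = torch.zeros_like(x_seq[0])  # membrane potential, V_reset = 0
        spikes = []
        for x in x_seq:
            # Leaky integration with learnable 1/tau:
            # H[t] = V[t-1] + (1/tau) * (X[t] - V[t-1])
            v = v + torch.sigmoid(self.w) * (x - v)
            # Heaviside firing; training needs a surrogate gradient here
            # (e.g., the arctangent surrogate sketched further below).
            s = (v >= self.v_threshold).float()
            v = v * (1.0 - s)  # hard reset to 0 after a spike
            spikes.append(s)
        return torch.stack(spikes)
```

Parameterizing 1/τ as sigmoid(w) keeps τ > 1 throughout training, so the leak stays stable while the optimizer adjusts the time constant alongside the synaptic weights.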
“…Interestingly, our network with proxy learning could surpass other networks trained with surrogate gradient learning (SGL). For a fair comparison, we trained the same CSNN as ours using the surrogate gradient learning method (with the arctangent surrogate function [31]), which reached a best accuracy of 94.41% with T = 50. We also trained a CANN with the same architecture as our CSNN using backpropagation, which reached a best accuracy of 94.60% (0.04% better than proxy learning).…”
Section: Fashion-MNIST
confidence: 99%
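Since the arctangent surrogate is referenced here, the following is a hedged PyTorch sketch of how such a surrogate gradient is typically implemented: the forward pass is the exact Heaviside step, while the backward pass substitutes the derivative of a scaled arctangent. The width parameter alpha and the class name are assumptions for illustration.

```python
import torch

class ATanSpike(torch.autograd.Function):
    """Sketch of spike generation with an arctangent surrogate gradient,
    as used for surrogate gradient learning in [31]. alpha is an
    illustrative width parameter."""

    @staticmethod
    def forward(ctx, v_minus_th: torch.Tensor, alpha: float = 2.0):
        ctx.save_for_backward(v_minus_th)
        ctx.alpha = alpha
        return (v_minus_th >= 0).float()  # Heaviside step: spike if v >= threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        alpha = ctx.alpha
        # Derivative of (1/pi) * arctan(pi*alpha*x/2) + 1/2:
        # alpha / (2 * (1 + (pi*alpha*x/2)^2))
        sg = alpha / (2.0 * (1.0 + (torch.pi * alpha * v_minus_th / 2.0) ** 2))
        return grad_output * sg, None
```

In a neuron's forward pass one would then write, for example, `s = ATanSpike.apply(v - v_threshold)`, so the hard threshold is used at inference while gradients flow through the smooth surrogate during training.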
“…While this is a simple method to reduce SNN latency, it may have limitations in neuromorphic chip designs in terms of spike routing or parallel spike-processing capability. It is worth mentioning that additional optimizations such as learnable membrane time constants (Rathi and Roy, 2020; Fang et al., 2021b), network architectures such as residual networks (Fang et al., 2021a), conversion error calibration techniques (Deng and Gu, 2021; Li et al., 2021), and hybrid spike encoding (Datta et al., 2021) are complementary to the current proposal and can be added to the algorithm to further reduce inference latency. Tables 2, 3 therefore include primarily basic SNN architectures based on IF nodes without any additional optimizations, to substantiate the importance and interpretability of the need for layerwise threshold optimization.…”
Section: Experiments and Results
confidence: 99%
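The layerwise threshold optimization mentioned above can be illustrated with a small calibration routine: run a few batches through the source ANN, record each ReLU layer's activations, and set the corresponding spiking layer's threshold from a high percentile of those activations. This is only a sketch under those assumptions; the hook-based collection and percentile rule below are illustrative, not the cited papers' exact procedures.

```python
import torch

def layerwise_thresholds(ann, loader, percentile=99.9, device="cpu"):
    """Sketch of layerwise threshold calibration for ANN-to-SNN
    conversion: one threshold per ReLU layer, taken as a high
    percentile of that layer's activations on calibration data.
    Assumes loader yields (input, label) batches."""
    acts = {}

    def make_hook(name):
        def hook(_module, _inputs, output):
            acts.setdefault(name, []).append(output.detach().flatten())
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in ann.named_modules()
               if isinstance(m, torch.nn.ReLU)]
    with torch.no_grad():
        for x, _ in loader:  # a few calibration batches suffice
            ann(x.to(device))
    for h in handles:
        h.remove()
    return {n: torch.quantile(torch.cat(v), percentile / 100.0).item()
            for n, v in acts.items()}
```

Using a high percentile rather than the raw maximum makes the thresholds robust to activation outliers, which is the usual motivation for percentile-based threshold balancing.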
“…
Method                                    Acc. (%)
GCN [28]                                  54.00
LIF-SCNN [5]                              60.50
tdBN-ResNet18 [6]                         67.80
PLIF [29] (T=20)                          74.80
LIF-R18, from scratch                     53.64
LIF-R18pipe-D (reuse ImageNet Exp.)       62.89
LIF-R18pipe-D (reuse ES-ImageNet Exp.)    …
…”
Section: Network Pretrain Methods
confidence: 99%