2020 · Preprint · DOI: 10.48550/arxiv.2003.11741

T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding

Abstract: Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems. The deep neural network-to-SNN conversion approach has been widely studied to broaden the applicability of SNNs. Most previous studies, however, have not fully utilized the spatio-temporal aspects of SNNs, which has led to inefficiency in terms of the number of spikes and inference latency. In…

Cited by 7 publications (26 citation statements) · References 15 publications (66 reference statements)

“…Neural coding defines how information is represented in spike trains, including the encoding and decoding functions [22]. Various types of neural coding have been proposed, such as rate [23,24,25], phase [26], burst [27], temporal-switching coding (TSC) [28], and TTFS coding [12,16,17,15]. To maximize efficiency by fully utilizing the temporal information in the spike train, TTFS coding, also known as latency coding, was introduced in SNNs [12].…”
Section: Spiking Neural Network
confidence: 99%
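To make the contrast between the coding families concrete, here is a minimal NumPy sketch (not taken from the cited works) comparing rate coding with TTFS/latency coding for a single normalized input value; the time window T, the Bernoulli spike generation for rate coding, and the linear intensity-to-latency mapping are illustrative assumptions.

```python
import numpy as np

def rate_encode(intensity, T=100, rng=None):
    """Rate coding: spike probability per time-step is proportional to the input intensity."""
    rng = rng or np.random.default_rng(0)
    return (rng.random(T) < intensity).astype(np.uint8)  # brighter input -> more spikes

def ttfs_encode(intensity, T=100):
    """TTFS (latency) coding: a single spike whose timing carries the value."""
    train = np.zeros(T, dtype=np.uint8)
    t = int(round((1.0 - intensity) * (T - 1)))  # brighter input -> earlier spike
    train[t] = 1
    return train

x = 0.8  # normalized input intensity
print(rate_encode(x).sum(), "spikes (rate) vs", ttfs_encode(x).sum(), "spike (TTFS)")
```

The point of the comparison is the spike budget: rate coding spends many spikes per neuron per inference, while TTFS spends at most one and moves the information into *when* that spike occurs.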
“…These approaches, which use an approximate differentiable spiking function, enable efficiency-aware training of deep SNNs. In another approach to fully exploiting their energy-efficiency potential, deep SNNs have adopted temporal coding, such as time-to-first-spike (TTFS) coding [16,17,15], which represents information with spike times and has shown superior efficiency with fewer spikes. However, these methods in deep SNNs have been limited by the lack of consideration for efficiency during training.…”
Section: Introduction
confidence: 99%
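The "approximate differentiable spiking function" mentioned above is commonly realized as a surrogate gradient. Below is a minimal PyTorch-style sketch, assumed rather than taken from the cited papers, using a Heaviside spike in the forward pass and a rectangular surrogate derivative around the firing threshold in the backward pass; the threshold and window width are illustrative values.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, rectangular surrogate gradient backward."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold, width):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold, ctx.width = threshold, width
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the threshold; the true derivative is zero almost everywhere.
        surrogate = (torch.abs(v - ctx.threshold) < ctx.width).float() / (2 * ctx.width)
        return grad_output * surrogate, None, None

v = torch.tensor([0.4, 0.9, 1.3], requires_grad=True)
spikes = SurrogateSpike.apply(v, 1.0, 0.5)  # threshold=1.0, surrogate window=0.5
spikes.sum().backward()
print(spikes, v.grad)
```

Only the neurons whose membrane potential lies near the threshold receive a nonzero gradient, which is what makes end-to-end training of deep SNNs tractable despite the non-differentiable spike.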
“…Temporal coding allows only one spike per neuron, resulting in energy efficiency from fewer spikes. Here, spike latency is inversely proportional to the pixel intensity [35], [36], [31]. Thus, bright pixels generate spike events in earlier time-steps than dark pixels.…”
Section: Related Work (A. Spiking Neural Network)
confidence: 99%
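A small sketch of the inverse intensity-to-latency mapping described above, applied to a whole image; the normalization of pixel values to [0, 1], the window of T discrete time-steps, and the linear latency formula are assumptions for illustration, not the cited papers' exact scheme.

```python
import numpy as np

def image_to_ttfs(image, T=64):
    """Map each normalized pixel to one spike time: latency = (1 - intensity) * (T - 1)."""
    latency = np.rint((1.0 - image) * (T - 1)).astype(int)   # bright pixel -> small latency
    spikes = np.zeros((T,) + image.shape, dtype=np.uint8)
    rows, cols = np.indices(image.shape)
    spikes[latency, rows, cols] = 1                           # exactly one spike per pixel
    return spikes

img = np.array([[1.0, 0.5], [0.25, 0.0]])
print(np.argmax(image_to_ttfs(img), axis=0))  # brighter pixels fire in earlier time-steps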
“…More precisely, following previous works [36], [25], [63], we compute the energy consumption of SNNs by calculating the total number of floating point operations (FLOPs). Also, to compare ANNs and SNNs quantitatively, we calculate the energy based on standard CMOS technology [64], as shown in Table IV.…”
Section: Analysis on Energy-Efficiency
confidence: 99%
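A hedged sketch of this FLOP-count energy estimate. The per-operation costs below are the commonly cited 45 nm CMOS figures (roughly 4.6 pJ per 32-bit multiply-accumulate and 0.9 pJ per accumulate); the exact constants and spike rates used in the cited Table IV may differ.

```python
# Assumed 45 nm CMOS energy costs in pJ; the cited work may use different constants.
E_MAC = 4.6   # multiply-accumulate (ANN operation)
E_AC  = 0.9   # accumulate only (SNN operation, triggered by a spike)

def ann_energy_pj(flops):
    """ANN: every FLOP is a multiply-accumulate."""
    return flops * E_MAC

def snn_energy_pj(flops, spike_rate):
    """SNN: only spike-driven accumulates cost energy; spike_rate = avg spikes per neuron per inference."""
    return flops * spike_rate * E_AC

flops = 1.5e10  # illustrative FLOP count for a VGG-scale network
print("ANN/SNN energy ratio:", ann_energy_pj(flops) / snn_energy_pj(flops, spike_rate=0.2))
```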
“…On custom neuromorphic architectures, such as TrueNorth [30] and SpiNNaker [31], the total energy is estimated as FLOPs × E_compute + T × E_static [32], where the parameters (E_compute, E_static) can be normalized to (0.4, 0.6) and (0.64, 0.36) for TrueNorth and SpiNNaker, respectively [32]. Since the total FLOPs for VGG-16 (>10^9) is several orders of magnitude higher than the SOTA T, the total energy of a deep SNN on neuromorphic hardware is compute bound and thus we would see similar energy improvements on them.…”
Section: B. Floating Point Operations (FLOPs) and Compute Energy
confidence: 99%
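A worked sketch of the neuromorphic-hardware estimate quoted above, Energy = FLOPs × E_compute + T × E_static, using the normalized parameter pairs from the excerpt; the FLOP count and number of time-steps T are illustrative placeholders, not values from the cited work.

```python
# Normalized (E_compute, E_static) pairs quoted from [32].
HARDWARE = {"TrueNorth": (0.4, 0.6), "SpiNNaker": (0.64, 0.36)}

def neuromorphic_energy(flops, timesteps, e_compute, e_static):
    """Total (normalized) energy: FLOPs * E_compute + T * E_static."""
    return flops * e_compute + timesteps * e_static

flops = 1.5e10   # illustrative VGG-16-scale FLOP count (>1e9, as the excerpt notes)
T = 5            # illustrative number of inference time-steps
for name, (e_c, e_s) in HARDWARE.items():
    total = neuromorphic_energy(flops, T, e_c, e_s)
    # Because FLOPs >> T, the static term is negligible and the energy is compute bound.
    print(f"{name}: total {total:.3e}, static share {T * e_s / total:.1e}")
```

Running the sketch makes the excerpt's argument explicit: the static term contributes a vanishing fraction of the total, so improvements in compute (FLOP-driven) energy translate almost directly to improvements on these platforms.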