2021 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas51556.2021.9401667

Reducing Latency in a Converted Spiking Video Segmentation Network

Abstract: Spiking Neural Networks (SNNs) can be configured, through various ANN-SNN conversion methods, to reach accuracy almost equivalent to that of Analog Neural Networks (ANNs). Most of these methods are applied to classification and object detection networks tested on frame-based datasets. In this work, we demonstrate a converted SNN for image segmentation, applied to a natural video dataset. Instead of resetting the network state with each input frame, we capitalize on the temporal redundancy between adjacent frames in a natur…
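The core idea in the abstract, carrying the spiking network's state across frames instead of resetting it, can be illustrated with a toy sketch. This is a hypothetical single-layer illustration, not the paper's implementation: if_layer_step, run_video, and all parameters are invented for exposition. With persistent state, each frame starts from membrane potentials already shaped by the similar previous frame, so fewer timesteps per frame are needed to recover accurate firing rates.

```python
import numpy as np

def if_layer_step(v, inp, threshold=1.0):
    """One timestep of an integrate-and-fire layer with reset-by-subtraction."""
    v = v + inp
    spikes = (v >= threshold).astype(np.float32)
    v = v - spikes * threshold
    return v, spikes

def run_video(frames, weights, timesteps_per_frame, reset_each_frame):
    """Rate-coded spiking pass over a frame sequence.

    With reset_each_frame=False the membrane potentials persist, so each
    new frame starts from a state shaped by the previous frame and needs
    fewer timesteps to settle.
    """
    v = np.zeros(weights.shape[0], dtype=np.float32)
    outputs = []
    for frame in frames:
        if reset_each_frame:
            v = np.zeros_like(v)  # conventional per-frame reset
        inp = weights @ frame
        rate = np.zeros_like(v)
        for _ in range(timesteps_per_frame):
            v, s = if_layer_step(v, inp)
            rate += s
        outputs.append(rate / timesteps_per_frame)  # spike rate ~ analog activation
    return outputs

# toy usage: two nearly identical 3-pixel frames, one output neuron
W = np.array([[0.5, 0.3, 0.2]], dtype=np.float32)
frames = [np.array([1.0, 0.8, 0.6]), np.array([1.0, 0.8, 0.65])]
print(run_video(frames, W, timesteps_per_frame=20, reset_each_frame=False))
```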

Cited by 12 publications (5 citation statements) | References: 17 publications

Citation statements (ordered by relevance):

“…Conventionally, there are two distinct routes to train a functional SNN model: (1) training an SNN from scratch; (2) converting a pretrained DNN to an SNN. Methods for direct training of SNNs have made tremendous progress recently, but are often computationally expensive and difficult to scale up to some of the more challenging tasks [48]. Converting a pretrained DNN is a more straightforward approach to obtaining an SNN whose accuracy on a particular task is competitive with the state-of-the-art DNN.…”
Section: Theoretical Model of SNN
Mentioning confidence: 99%
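As a rough illustration of the conversion route this statement describes, the sketch below shows the standard rate-coding argument behind ANN-SNN conversion: an integrate-and-fire neuron with reset-by-subtraction fires at a rate that approximates a ReLU activation for constant inputs below threshold. This is a generic textbook-style example, not code from the cited works; if_rate and its parameters are invented.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_rate(inp, timesteps=200, threshold=1.0):
    """Firing rate of an integrate-and-fire neuron driven by a constant input."""
    v = np.zeros_like(inp)
    count = np.zeros_like(inp)
    for _ in range(timesteps):
        v += inp
        s = (v >= threshold).astype(np.float64)
        v -= s * threshold  # reset by subtraction
        count += s
    return count / timesteps

x = np.array([-0.5, 0.0, 0.3, 0.9])
print(relu(x))     # [0.  0.  0.3 0.9]
print(if_rate(x))  # approaches the same values for inputs in [0, threshold]
```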
“…The training of SNNs is mainly divided into the following three categories: gradient backpropagation-based methods (Wu et al. 2018, 2019; Zheng et al. 2021; Shen, Zhao, and Zeng 2022a; Li et al. 2022c; Deng et al. 2022), spike-timing-dependent plasticity (STDP)-based methods (Diehl and Cook 2015; Hao et al. 2020; Zhao et al. 2020; Dong et al. 2022), and conversion-based methods (Han, Srinivasan, and Roy 2020; Bu et al. 2021; Li and Zeng 2022; Liu et al. 2022; Li et al. 2022b). With these proposed algorithms, SNNs show excellent performance in various complex scenarios (Stagsted et al. 2020; Godet et al. 2021; Sun, Zeng, and Zhang 2021; Cheni et al. 2021). In particular, SNNs have shown promising results in processing neuromorphic, event-based data due to their ability to process information in the time dimension (Xing, Di Caterina, and Soraghan 2020; Chen et al. 2020; Viale et al. 2021).…”
Section: Introduction
Mentioning confidence: 99%
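Of the three training categories listed above, the gradient-backpropagation route hinges on replacing the non-differentiable spike with a surrogate gradient. The sketch below is a minimal PyTorch illustration of that trick under common assumptions (rectangular surrogate, threshold at zero); it is not the implementation of any specific cited method.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()  # spike when membrane potential crosses the threshold (0 here)

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # pretend the step function has slope 1 in a window around the threshold
        return grad_out * (v.abs() < 0.5).float()

v = torch.randn(4, requires_grad=True)
SpikeFn.apply(v).sum().backward()
print(v.grad)  # nonzero only where |v| < 0.5
```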
“…However, it might cause decreased accuracy when outdated information from the previous frame is carried over. To capitalize on the useful initialization, we previously proposed an adaptable interval reset of the network state [6], which can achieve 35x speedup with acceptable accuracy loss.…”
Section: Introduction
Mentioning confidence: 99%
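The interval-reset idea quoted above can be sketched as a simple control policy: flush the network state every N frames rather than every frame, trading staleness against per-frame latency. The code below is a hypothetical fixed-interval version; segment_video, step_fn, and reset_interval are invented names, and the paper's adaptable scheme would choose the interval from the data rather than fixing it.

```python
import numpy as np

def segment_video(frames, step_fn, state_dim, reset_interval=8):
    """Run a stateful model over frames, resetting only every reset_interval frames.

    step_fn(state, frame) -> (new_state, output). A fixed interval is used
    here for simplicity; an adaptable scheme would pick it per sequence.
    """
    state = np.zeros(state_dim, dtype=np.float32)
    outputs = []
    for i, frame in enumerate(frames):
        if i % reset_interval == 0:
            state = np.zeros_like(state)  # periodic reset flushes stale information
        state, out = step_fn(state, frame)
        outputs.append(out)
    return outputs

# toy usage: a leaky accumulator stands in for the spiking segmentation network
step = lambda s, f: (0.9 * s + f, (0.9 * s + f).mean())
print(segment_video([np.ones(4)] * 20, step, state_dim=4, reset_interval=8))
```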