2019
DOI: 10.1109/jetcas.2019.2951121

Asynchronous Spiking Neurons, the Natural Key to Exploit Temporal Sparsity

Abstract: Inference of Deep Neural Networks for stream signal (video/audio) processing on edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous state-full processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with state-full neurons allows exploitation of the existing sparsity…

Cited by 24 publications (12 citation statements)
References 40 publications (56 reference statements)
“…The main idea behind this approach [14, 4] is to use temporal sparsity when working with sequential data.…”
Section: Delta Network
confidence: 99%
“…Authors of [4, 14] call this approach a "Hysteresis Quantizer". To implement it, an additional state variable is introduced into each neuron to record the last transmitted value.…”
Section: Delta Network
confidence: 99%
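
To make the cited mechanism concrete, here is a minimal sketch of a hysteresis quantizer for one layer of neurons. The function name, the NumPy formulation, and the threshold value are illustrative assumptions, not code from [4, 14]: each neuron keeps the last value it transmitted and only emits a new delta when the current activation deviates from that record by more than a threshold.

```python
import numpy as np

def hysteresis_quantizer(x, last_sent, threshold=0.05):
    """Sketch of a hysteresis quantizer for one layer of neurons.

    x          : current activation vector of the layer
    last_sent  : per-neuron record of the last transmitted value
                 (the extra state variable described in the citation above)
    threshold  : illustrative delta threshold, not a value from the paper

    Returns the sparse deltas to propagate and the updated state.
    """
    delta = x - last_sent
    fire = np.abs(delta) > threshold          # neurons that changed enough to transmit
    out = np.where(fire, delta, 0.0)          # silent neurons contribute nothing downstream
    new_last = np.where(fire, x, last_sent)   # state updated only when a value is transmitted
    return out, new_last
```

Because the stored reference is only updated when a transmission actually happens, slowly drifting activations do not produce a stream of tiny deltas, which is the point of the hysteresis over a plain delta encoder.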
“…This paper mainly focuses on the sparsity within one frame, which is also called spatial sparsity. Besides the spatial sparsity, there is also temporal sparsity among frames [43]. If the data rarely change over time, they are regarded as temporally sparse.…”
Section: E R
confidence: 99%
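
To make the spatial/temporal distinction concrete, a small sketch of how the two kinds of sparsity could be measured on a stream of activation frames; the function names and the numerical tolerance are assumptions for illustration, not definitions from the cited paper.

```python
import numpy as np

def spatial_sparsity(frame):
    """Fraction of zero activations within a single frame."""
    return float(np.mean(frame == 0))

def temporal_sparsity(prev_frame, frame, tol=1e-6):
    """Fraction of activations that are numerically unchanged between consecutive frames."""
    return float(np.mean(np.abs(frame - prev_frame) <= tol))
```

A slowly changing video stream can have low spatial sparsity in each frame yet high temporal sparsity between frames, which is the regime the cited work targets.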
“…The addition of thresholding logic to suppress propagation of small deltas reduces computation counts even further [11]. Sparse execution of neural networks is far more efficient on asynchronous architectures [22] than on statically scheduled architectures such as CPUs and GPUs.…”
Section: Related Work
confidence: 99%
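
A rough sketch of why suppressing small deltas cuts computation in a fully connected layer; the threshold and the operation counting are illustrative assumptions and say nothing about how [11] or [22] actually schedule the work on hardware.

```python
import numpy as np

def propagate_deltas(W, delta, threshold=0.02):
    """Push only significant input deltas through an incremental y = W @ x update.

    Every suppressed input skips an entire column of multiply-accumulates,
    which is the saving that event-driven / asynchronous hardware can exploit
    directly, while statically scheduled CPUs and GPUs pay overhead for the
    resulting irregularity.
    """
    active = np.abs(delta) > threshold
    y_update = W[:, active] @ delta[active]   # work proportional to active inputs only
    macs_done = W.shape[0] * int(active.sum())
    macs_dense = W.size                       # cost of a dense recompute for comparison
    return y_update, macs_done, macs_dense
```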