2021
DOI: 10.1109/tcsi.2021.3052885
A Fast and Energy-Efficient SNN Processor With Adaptive Clock/Event-Driven Computation Scheme and Online Learning

Cited by 87 publications (28 citation statements)
References 24 publications
“…Spiking neural networks (SNNs) (Maass, 1997 ) have attracted increasing attention because of their characteristics, including preferable biological interpretability and low-power processing potential (Akopyan et al, 2015 ; Shen et al, 2016 ; Davies et al, 2018 ; Moradi et al, 2018 ; Pei et al, 2019 ; Li et al, 2021 ; Pham et al, 2021 ). Compared to traditional artificial neural networks (ANNs), SNNs increase the time dimension so that they naturally support information processing in the temporal domain.…”
Section: Introduction (mentioning)
Confidence: 99%
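To make the time-dimension point above concrete, here is a minimal, hypothetical sketch (not taken from the cited works) of a discrete-time leaky integrate-and-fire neuron: the input is a spike train indexed by time step, and the membrane potential leaks, integrates, and resets over those steps. The weights, time constant, and threshold below are illustrative assumptions.

```python
import numpy as np

def lif_forward(spike_train, tau=0.9, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire (LIF) neuron.

    spike_train: array of shape (T, n_inputs) with 0/1 input spikes.
    Returns an array of shape (T,) of output spikes, one per time step,
    showing how the extra time dimension carries information.
    """
    weights = np.random.default_rng(0).normal(0.0, 0.5, spike_train.shape[1])
    v = 0.0
    out = np.zeros(spike_train.shape[0])
    for t, x_t in enumerate(spike_train):
        v = tau * v + weights @ x_t      # leaky integration of weighted input spikes
        if v >= v_th:                    # threshold crossing -> emit an output spike
            out[t] = 1.0
            v = v_reset                  # reset membrane potential after spiking
    return out

# Example: 20 time steps, 8 input channels of random spikes
spikes_in = (np.random.default_rng(1).random((20, 8)) < 0.3).astype(float)
print(lif_forward(spikes_in))
```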
“…Limited resources induce a latency-accuracy trade-off at individual agents (also attention vs. precision in robotics): sensors can either send raw, inaccurate measurements to the base station, or refine them locally before transmission, incurring extra processing delay due to hardware-constrained computation. Common options are averaging or filtering of noisy samples, compression of images or other high-dimensional data [18], [19], or descent direction computation in online learning [20], [21]. The dynamical nature of the monitored system makes such delayed processed measurements obsolete, so that sensing design for multiple, possibly heterogeneous, agents becomes nontrivial at network level, and may require online adaptation of local processing.…”
Section: Introduction (mentioning)
Confidence: 99%
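As a hedged illustration of the latency-accuracy trade-off described above (again, not from the cited works): a sensor can average n noisy readings before transmitting, which shrinks the estimate's variance roughly as noise_std**2 / n but adds a delay that grows linearly with n. The function name, parameters, and values are assumptions chosen only for illustration.

```python
import numpy as np

def sensed_estimate(true_value, n_samples, noise_std=1.0, sample_period=0.01):
    """Average n_samples noisy readings before transmitting.

    Returns (estimate, delay): the estimate's variance scales as
    noise_std**2 / n_samples, while the delay grows with n_samples,
    so stale-but-accurate competes with fresh-but-noisy.
    """
    rng = np.random.default_rng()
    readings = true_value + noise_std * rng.standard_normal(n_samples)
    return readings.mean(), n_samples * sample_period

for n in (1, 10, 100):
    est, delay = sensed_estimate(true_value=5.0, n_samples=n)
    print(f"n={n:3d}  estimate={est:6.3f}  delay={delay:.2f}s")
```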
“…Therefore, general-purpose computers such as the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) are incompetent in deploying brain-inspired SNN models, as those von Neumann machines are oriented for dense numerical calculations rather than sparse temporal spike processing. To fully exploit the computational and energy efficiency of SNNs, various dedicated neuromorphic chips and hardware systems have recently been designed [2][3][4][5][6][7][8][9][10][11][12][13][14]. These VLSI chips support various spiking neuron models at different levels of biological fidelity and computational complexity, and generally adopt scalable routing schemes including crossbars and network-on-chip (NoC) infrastructures towards large-scale or even brain-scale multichip systems.…”
Section: Introduction (mentioning)
Confidence: 99%
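The sparse-versus-dense contrast in that statement can be sketched with a toy comparison of clock-driven and event-driven synaptic updates. This is a simplified illustration under assumed dimensions and spike rates, not the processor or chips discussed in the cited works.

```python
import numpy as np

def clock_driven_step(v, w, spikes_in, leak=0.95):
    """Dense update: every neuron's potential is recomputed every time step."""
    return leak * v + w @ spikes_in

def event_driven_step(v, w, spike_idx, leak=0.95):
    """Sparse update: only accumulate weight columns of inputs that spiked."""
    v = leak * v
    for j in spike_idx:            # one synaptic update per input event
        v = v + w[:, j]
    return v

n_pre, n_post = 256, 128
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (n_post, n_pre))
spikes = (rng.random(n_pre) < 0.02).astype(float)   # ~2% of inputs spike
v0 = np.zeros(n_post)

dense = clock_driven_step(v0, w, spikes)
sparse = event_driven_step(v0, w, np.flatnonzero(spikes))
# Same result, but far fewer synaptic operations when spikes are sparse
print(np.allclose(dense, sparse))
```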