2021
DOI: 10.3389/fnins.2021.651141

Comparison of Artificial and Spiking Neural Networks on Digital Hardware

Abstract: Despite the success of Deep Neural Networks—a type of Artificial Neural Network (ANN)—in problem domains such as image recognition and speech processing, the energy and processing demands during both training and deployment are growing at an unsustainable rate in the push for greater accuracy. There is a temptation to look for radical new approaches to these applications, and one such approach is the notion that replacing the abstract neuron used in most deep networks with a more biologically-plausible spiking…

Cited by 83 publications (39 citation statements) · References 24 publications

Citation statements (ordered by relevance):
“…However, previous work relied on rate-based or time-to-first-spike coding schemes. Here, we expanded ITL techniques into the realm of surrogate gradient learning, which flexibly interpolates between rate- and timing-based coding schemes on multilayer and recurrent architectures, thereby simultaneously improving performance and energy efficiency, while also being conducive for fast inference (24).…”
Section: Discussion (mentioning)
confidence: 99%
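To make the two coding schemes named in this statement concrete, here is a minimal illustrative sketch, not taken from the cited work: rate coding represents a value by how often a neuron spikes, while time-to-first-spike (TTFS) coding represents it by how early a single spike occurs. The function names and parameters below are assumptions for illustration.

```python
# Illustrative sketch (hypothetical helpers, not from the cited papers)
# contrasting rate coding and time-to-first-spike (TTFS) coding.
import numpy as np

def rate_code(x, n_steps=20, seed=0):
    """Bernoulli spike train whose per-step firing probability is x in [0, 1]."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_steps) < x).astype(int)

def ttfs_code(x, n_steps=20):
    """Single spike whose timing is earlier for larger x in (0, 1]."""
    train = np.zeros(n_steps, dtype=int)
    train[int(round((1.0 - x) * (n_steps - 1)))] = 1
    return train

print(rate_code(0.8))  # many spikes spread over the window
print(ttfs_code(0.8))  # one early spike encodes the same value
```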
“…First, one has to overcome the binary nature of spikes, which impedes vanilla gradient descent (21–23). Second, training has to ensure sparse spiking activity to exploit the superior power efficiency of SNN processing (24, 25). Finally, training has to achieve all of the above while coping with analog hardware imperfections inevitably tied to their manufacturing process.…”
(mentioning)
confidence: 99%
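The statement above points at the standard workaround for the non-differentiable spike: surrogate gradients. Below is a minimal sketch in PyTorch, assuming a fast-sigmoid surrogate in the style of SuperSpike; the class name, threshold convention, and scale value are illustrative assumptions, not the cited papers' implementation.

```python
# Minimal sketch of a surrogate-gradient spike function: the forward pass
# is a hard threshold, while the backward pass substitutes a smooth
# fast-sigmoid derivative so gradient descent can flow through the spike.
import torch

class SurrogateSpike(torch.autograd.Function):
    scale = 10.0  # assumed steepness of the surrogate; a tunable choice

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Emit a spike (1.0) wherever the potential crosses threshold 0.
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: 1 / (scale * |u| + 1)^2 stands in for
        # the ill-defined derivative of the step function.
        surrogate = 1.0 / (SurrogateSpike.scale * membrane_potential.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage (v and v_threshold are hypothetical tensors):
# spikes = SurrogateSpike.apply(v - v_threshold)
```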
“…This is a simplification, because in reality the actual energy consumption also depends on the specific hardware that the networks run on. However, this is a widely adopted metric because it is simple to calculate and allows for an approximate comparison [49, 14, 50, 33, 51]. In [52], an overview of different neuromorphic architectures is given that shows a range of 2.8–360 pJ per synaptic operation, i.e., a factor of 100 between different hardware implementations is possible for SNNs.…”
Section: Appendix (mentioning)
confidence: 99%
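As a worked illustration of the metric described above (counting synaptic operations and multiplying by a per-operation energy cost), here is a minimal sketch. The spike count and fan-out are invented for illustration; the 2.8–360 pJ range is the one quoted in the statement.

```python
# Estimate SNN inference energy by counting synaptic operations (SynOps)
# and scaling by a per-operation cost, as in the metric discussed above.

def snn_energy_joules(num_spikes, fan_out, energy_per_synop_pj):
    """Energy ~= (spikes) * (fan-out) * (energy per synaptic operation)."""
    synaptic_ops = num_spikes * fan_out
    return synaptic_ops * energy_per_synop_pj * 1e-12  # pJ -> J

# Example: 1,000 spikes in a layer, each fanning out to 256 synapses,
# evaluated at the best and worst cases from the surveyed hardware range.
for pj in (2.8, 360.0):
    print(f"{pj:6.1f} pJ/SynOp -> {snn_energy_joules(1_000, 256, pj):.2e} J")
```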
“…System efficiency comes from sensing and data processing. Unlike classical vision systems, neuromorphic systems try to efficiently capture a notion of seeing motion [5–9]. Bio-inspired learning methods, i.e., spiking neural networks (SNNs), address issues related to energy efficiency [5, 10–22].…”
Section: Introduction (mentioning)
confidence: 99%