2020
DOI: 10.48550/arxiv.2001.01682
Preprint

Recognizing Images with at most one Spike per Neuron

Abstract: In order to port the performance of trained artificial neural networks (ANNs) to spiking neural networks (SNNs), which can be implemented in neuromorphic hardware with a drastically reduced energy consumption, an efficient ANN to SNN conversion is needed. Previous conversion schemes focused on the representation of the analog output of a rectified linear (ReLU) gate in the ANN by the firing rate of a spiking neuron. But this is not possible for other commonly used ANN gates, and it reduces the throughput even …
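The rate-based conversion scheme that the abstract contrasts against can be illustrated with a short sketch. The snippet below is a minimal, illustrative example, not code from the paper; the function names and parameter values (n_steps, threshold) are assumptions. It drives a simple integrate-and-fire (IF) neuron with a constant input and shows that its firing rate approximates a ReLU output for inputs between 0 and the firing threshold.

```python
# Minimal sketch (not from the paper): approximating a ReLU gate by the
# firing rate of an integrate-and-fire (IF) neuron, i.e. the rate-based
# ANN-to-SNN conversion scheme the abstract refers to. All names and
# parameter values are illustrative assumptions.

def relu(x):
    return max(0.0, x)

def if_firing_rate(x, n_steps=1000, threshold=1.0):
    """Drive an IF neuron with a constant input x for n_steps and return
    its firing rate (spikes per step).

    For 0 <= x <= threshold the rate approaches x / threshold, which is
    why rate coding can stand in for a ReLU activation; negative inputs
    never reach threshold (rate 0), and inputs above the threshold
    saturate at one spike per step.
    """
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += x                     # integrate the constant input
        if v >= threshold:         # fire and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / n_steps

for x in [-0.5, 0.0, 0.25, 0.5, 0.9]:
    print(f"x={x:+.2f}  relu={relu(x):.2f}  IF rate ~ {if_firing_rate(x):.2f}")
```

The drawback the abstract points to is visible here: an accurate rate estimate requires many time steps and many spikes per neuron, which is what a scheme using at most one spike per neuron avoids.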

Cited by 3 publications (6 citation statements). References 15 publications.
“…Alternatively, other approaches such as [8,16] can be applied to any type of network. The first one manages to do this by using circuits of neurons in order to approximate arbitrary functions.…”
Section: Conversion Methods
Mentioning confidence: 99%
“…In order to overcome the aforementioned challenges, some approaches use conversion methods [6][7][8], where they train non-spiking ANNs and then approximate their computations using an SNN. Compared to directly training an SNN, these methods are not able to perform online learning, they lose temporal resolution, and in most cases they have higher latency and energy consumption.…”
Section: Introduction
Mentioning confidence: 99%
“…Alternatively, other approaches such as [8,12] can be applied to any type of network. The first one manages to do this by using circuits of neurons in order to approximate arbitrary functions.…”
Section: Conversion Methods
Mentioning confidence: 99%
“…In order to overcome the aforementioned challenges, some approaches use conversion methods [6,7,8], where they train non-spiking ANNs and then approximate their computations using an SNN. Compared to directly training an SNN, these methods have higher latency and energy consumption, they are not able to perform online learning and lose temporal resolution.…”
Section: Introduction
Mentioning confidence: 99%
“…Conversion approaches focus mostly on computer vision domain and CNN-to-SNN conversion [43]; however, there has been an attempt at converting recurrent neural networks as well [59]. Resulting SNNs usually are rate-coded, but there have been attempts to use latency coding as well [60]. Most conversion approaches are based on converting neurons with rectified linear unit (ReLU) activation function to IF neurons.…”
Section: Types Of Learning Approaches
Mentioning confidence: 99%
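The last statement mentions latency coding as an alternative to rate coding. As a rough illustration, and again an assumption-laden sketch rather than code from any of the cited papers (function names, t_max, and x_max are invented for this example), a single spike time can encode an analog value, so each neuron needs at most one spike:

```python
# Minimal sketch (illustrative only): latency / time-to-first-spike coding,
# where a larger value fires earlier and each neuron emits at most one spike.
# Function names, t_max and x_max are assumptions for this example.

def latency_encode(x, t_max=100, x_max=1.0):
    """Map a value in [0, x_max] to a spike time in [0, t_max].
    Values <= 0 produce no spike at all (returned as None)."""
    x = min(max(x, 0.0), x_max)
    if x == 0.0:
        return None
    return round(t_max * (1.0 - x / x_max))   # earlier spike = larger value

def latency_decode(spike_time, t_max=100, x_max=1.0):
    """Invert the encoding back to an approximate analog value."""
    if spike_time is None:
        return 0.0
    return x_max * (1.0 - spike_time / t_max)

for x in [0.0, 0.25, 0.5, 0.9]:
    t = latency_encode(x)
    print(f"x={x:.2f}  spike time={t}  decoded={latency_decode(t):.2f}")
```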