2022 International Joint Conference on Neural Networks (IJCNN) 2022
DOI: 10.1109/ijcnn55064.2022.9892379
Spikemax: Spike-based Loss Methods for Classification

Abstract: Spiking Neural Networks (SNNs) are a promising research paradigm for low-power edge-based computing. Recent works in SNN backpropagation have enabled training of SNNs for practical tasks. However, since spikes are binary events in time, standard loss formulations are not directly compatible with spike output. As a result, current works are limited to using mean-squared loss of spike count. In this paper, we formulate the output probability interpretation from the spike count measure and introduce spike-based neg…
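The abstract describes interpreting per-class spike counts as output probabilities and applying a negative log-likelihood loss. A minimal sketch of that idea, assuming (since the abstract is truncated) that the probability is a softmax over spike counts; the function name and shapes are illustrative, not the paper's API:

```python
import numpy as np

def spikemax_loss(spikes, target):
    """Negative log-likelihood over spike counts (illustrative sketch).

    spikes : (num_classes, num_timesteps) array of binary output spike trains
    target : index of the correct class
    """
    counts = spikes.sum(axis=1)          # spike count per output neuron
    # Treat counts as logits: a softmax turns them into class probabilities.
    exp = np.exp(counts - counts.max())  # shift for numerical stability
    probs = exp / exp.sum()
    return -np.log(probs[target])

# Example: 3 output neurons over 10 timesteps; neuron 1 fires most often,
# so the loss is small when the target class is 1.
rng = np.random.default_rng(0)
rates = np.array([[0.2], [0.8], [0.1]])
spikes = (rng.random((3, 10)) < rates).astype(float)
loss = spikemax_loss(spikes, target=1)
```

Unlike the mean-squared spike-count loss the abstract contrasts against, this formulation yields a proper likelihood over classes and therefore standard cross-entropy gradients with respect to the counts.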

Cited by 6 publications (6 citation statements)
References 31 publications
“…Our experiments seem to agree with the above and further confirm the great potential of using spike timing as part of the solution to an ASR problem. Furthermore, the use of spike-based losses [31] can expedite decision-making, thereby reducing the impact of additional latency even more.…”
Section: Discussion
confidence: 99%
“…Secondly, after audio is encoded, the actual execution of the audio processing in the neuromorphic domain is a very open research opportunity. Neuromorphic audio processing systems can employ a wide variety of strategies to perform processing in the neuromorphic domain, such as simplistic DNN conversion [66], using a network of feedforward or recurrent leaky integrate-and-fire neurons [7,67,68], a network of complex resonate-and-fire neurons [52], or a sigma-delta neural network (SDNN) as we describe in the following subsection for our baseline solution. Methodologies inspired by conventional deep learning, e.g.…”
Section: Neuromorphic Audio Processing and Promising Directions
confidence: 99%
“…The axonal delays endow the network with a short-term memory capability that allows the interaction of audio/features originating at different points in time. Learnable axonal delays have been shown to increase the expressivity and performance of networks, particularly for applications with spatio-temporal features [68,77]. Audio denoising is one such application.…”
Section: Sigma-delta Neural Network Architecture
confidence: 99%
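The statement above describes axonal delays giving a network short-term memory by letting features from different points in time interact. A minimal sketch of the mechanism, with fixed integer delays for illustration (the cited works [68, 77] learn the delays during training):

```python
import numpy as np

def apply_axonal_delays(spikes, delays):
    """Shift each channel's spike train forward in time by its axonal delay.

    spikes : (channels, timesteps) array of spike trains
    delays : per-channel delay in timesteps (fixed here; learnable in
             the cited works)
    """
    out = np.zeros_like(spikes)
    T = spikes.shape[1]
    for c, d in enumerate(delays):
        if d < T:
            out[c, d:] = spikes[c, : T - d]  # spikes beyond T are dropped
    return out

# Two channels firing simultaneously at t=0: with delays [0, 3] their
# spikes reach a downstream neuron at different times, so the neuron can
# combine inputs that originated at different moments.
spikes = np.zeros((2, 8))
spikes[:, 0] = 1.0
delayed = apply_axonal_delays(spikes, delays=[0, 3])
```

Because each delayed copy holds a past input until it arrives downstream, the delay line acts as the short-term memory the passage refers to.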
“…
Method: Acc (%)

DVSGesture
  Hetero. RSNN (Perez-Nieves et al., 2021): 82.9
  SLAYER (Shrestha and Orchard, 2018): 93.64 ± 0.49
  DECOLLE (Kaiser et al., 2020): 95.54
  SLAYER + SpikeMax (Shrestha et al., 2022): 95.83 ± 0.48
  PLIF (Fang et al., 2021) (STBP): 97.57
  STSC-SNN (Yu et al., 2022): 98.96
  Ours (CUBA-LIF): 92.8

SHD
  Hetero. RSNN (Perez-Nieves et al., 2021): 82.7 ± 0.8
  RSNN with data aug. + noise (Cramer et al., 2022): 83.2 ± 1.3
  Adaptive SRNN (Yin et al., 2021): 90.4
  TA-SNN (Yao et al., 2021): 91.08
  STSC-SNN (Yu et al., 2022): 92.36
  RadLIF (Bittar and Garner, 2022): 94.62
…”
Section: Table A1
confidence: 99%