2023 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas46773.2023.10181778

Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking

Cited by 6 publications (3 citation statements)
References 28 publications
“…Notably, this performance is competitive with results that employ the same data processing methods and network architecture. Patiño-Saucedo et al. (2023) introduce axonal delays in tandem with learnable time constants, enabling a reduction in model size to a mere 0.1 M parameters while preserving competitive performance.…”
Section: Overall Results (mentioning)
confidence: 99%
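To make the cited mechanism concrete, the following is a minimal sketch of a discrete-time LIF layer in which each output neuron carries an integer axonal delay and its own membrane time constant, in the spirit of Patiño-Saucedo et al. (2023). The function name, array shapes, and hard-reset dynamics are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): per-neuron axonal delays plus
# per-neuron time constants in a discrete-time LIF layer.
import numpy as np

def lif_layer_with_axonal_delays(spikes_in, weights, delays, tau, v_th=1.0):
    """spikes_in: (T, n_in) binary spike trains
    weights:   (n_in, n_out) synaptic weights
    delays:    (n_out,) integer axonal delays, in time steps
    tau:       (n_out,) membrane time constants (learnable during training)
    Returns a (T, n_out) array of output spikes, each neuron's output
    shifted in time by its own axonal delay."""
    T, _ = spikes_in.shape
    n_out = weights.shape[1]
    decay = np.exp(-1.0 / tau)          # per-neuron leak factor
    v = np.zeros(n_out)
    out = np.zeros((T, n_out))
    neuron_idx = np.arange(n_out)
    for t in range(T):
        v = decay * v + spikes_in[t] @ weights
        fired = v >= v_th
        v[fired] = 0.0                  # hard reset after a spike
        # Axonal delay: a spike emitted at t arrives at t + d.
        arrive = t + delays
        ok = fired & (arrive < T)
        out[arrive[ok], neuron_idx[ok]] = 1.0
    return out

# Toy usage with random inputs and delays:
rng = np.random.default_rng(0)
s = (rng.random((100, 16)) < 0.1).astype(float)
w = rng.normal(0.0, 0.5, size=(16, 8))
d = rng.integers(0, 10, size=8)         # integer axonal delays
tau = np.full(8, 5.0)                   # time constants (trainable)
y = lif_layer_with_axonal_delays(s, w, d, tau)
```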
“…These developments have spurred the exploration of jointly training synaptic weights and axonal delays in deep SNNs. While earlier research mainly centered on fixed delays with trainable weights (Bohte et al., 2002) and the concurrent training of synaptic weights and delays in shallow, single-layer SNNs (Taherkhani et al., 2015; Wang et al., 2019; Zhang et al., 2020), the joint training of synaptic weights and axonal delays in deep SNNs has recently begun to attract attention (Shrestha and Orchard, 2018; Shrestha et al., 2022; Sun et al., 2022, 2023a; Hammouamri et al., 2023; Patiño-Saucedo et al., 2023). Our prior effort (Sun et al., 2022) stands as one of the first successful attempts to apply this method to deep SNNs, achieving promising results on tasks with high temporal complexity.…”
(mentioning)
confidence: 99%
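One way to see how weights and delays can be trained jointly, as the statement above describes, is to make the delay itself differentiable. The sketch below realizes a real-valued delay as a fractional shift via linear interpolation between the two neighbouring integer shifts; this construction is an illustrative assumption, not the specific method of any cited paper.

```python
# Hedged sketch: a real-valued delay d implemented as a differentiable
# fractional shift, so gradients can flow into d during training.
import numpy as np

def fractional_delay(x, d):
    """Shift a 1-D signal x right by a real-valued delay d >= 0 using
    linear interpolation between the integer shifts floor(d), floor(d)+1."""
    k = int(np.floor(d))
    frac = d - k
    lo = np.concatenate([np.zeros(k), x])[: len(x)]       # shift by k
    hi = np.concatenate([np.zeros(k + 1), x])[: len(x)]   # shift by k + 1
    return (1.0 - frac) * lo + frac * hi

# d(fractional_delay)/dd = hi - lo, so a gradient step can update the
# delay parameter jointly with the synaptic weights. Example:
x = np.array([0.0, 1.0, 0.0, 0.0])
print(fractional_delay(x, 1.5))          # -> [0.  0.  0.5 0.5]
```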
“…A method for training per-neuron axonal delays based on the SLAYER learning paradigm was proposed (Shrestha and Orchard, 2018) and later extended with trainable delay caps (Sun et al., 2023b). Recently, the effects of axonal and synaptic delay learning have been studied by pruning multiple delay synapses (Patiño-Saucedo et al., 2023), modeling a one-layer multinomial logistic regression with synaptic delays (Grimaldi and Perrinet, 2023), and learning delays represented through 1D convolutions with learnable spacings (Hammouamri et al., 2023). Similarly, in order to train synaptic delays, spike trains have been transformed into continuous, differentiable analog signals (Wang et al., 2019).…”
Section: Figure (mentioning)
confidence: 99%
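As a rough illustration of the convolutional view of delays mentioned above, the sketch below parameterizes a delay as a causal 1D convolution whose kernel mass sits at a learnable position, loosely inspired by the learnable-spacings idea of Hammouamri et al. (2023). The Gaussian parameterization and all names here are assumptions, not the cited implementation.

```python
# Hedged sketch: a delay as a 1-D causal convolution with a learnable
# kernel position `pos` (as sigma -> 0 the kernel approaches a one-hot
# shift by round(pos)).
import numpy as np

def delay_kernel(max_delay, pos, sigma=0.5):
    """Normalized Gaussian bump centred at the real-valued, learnable
    delay position `pos`, defined on lags 0..max_delay."""
    t = np.arange(max_delay + 1)
    k = np.exp(-0.5 * ((t - pos) / sigma) ** 2)
    return k / k.sum()

def apply_delay(spike_train, pos, max_delay=20):
    k = delay_kernel(max_delay, pos)
    # Full convolution truncated to the input length keeps causality:
    # the output at time t depends only on spikes at times <= t.
    return np.convolve(spike_train, k)[: len(spike_train)]
```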