2019 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2019.1173-0

Advantages of heterogeneity of parameters in spiking neural network training

Cited by 6 publications (8 citation statements)
References 4 publications

“…Consider also the Fast Sigmoid surrogate gradient (Zenke & Ganguli, 2018; Perez-Nieves & Goodman, 2021) that avoids computing the exponential function in the Sigmoid to obtain the gradient:…”
Section: A. Proofs of Theoretical Results
confidence: 99%

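The fast sigmoid surrogate referenced in this quote replaces the derivative of the non-differentiable spike step with h(x) = 1/(β|x| + 1)², which needs only an absolute value and a division rather than an exponential. A minimal PyTorch sketch of that backward pass, assuming the threshold has already been subtracted from the membrane potential; the class name and β value are illustrative assumptions, not settings from the cited papers:

```python
import torch

class FastSigmoidSpike(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""

    beta = 10.0  # surrogate steepness; illustrative value

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # hard spike; non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of the fast sigmoid v / (1 + |v|), up to scaling:
        # 1 / (beta * |v| + 1)^2 -- no exp() needed, unlike the sigmoid.
        surrogate = 1.0 / (FastSigmoidSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = FastSigmoidSpike.apply
```

Calling spike_fn(v - threshold) inside a spiking layer then emits a hard spike in the forward pass while gradients flow through the cheap surrogate in the backward pass.
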
“…We implement the technique for fully connected networks with two hidden layers. Implementing the method for deeper networks should be straightforward, as shown in (Perez-Nieves & Goodman, 2021). However, we leave the adaptation of the method to convolutional layers for future work.…”
Section: Discussion
confidence: 99%

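For context, a sketch of the kind of fully connected, two-hidden-layer spiking network this quote describes. The LIF dynamics, layer sizes, and threshold are illustrative assumptions rather than the citing paper's method, and spike_fn is the surrogate function from the sketch above:

```python
import torch

class TwoHiddenLayerSNN(torch.nn.Module):
    """Fully connected SNN with two hidden layers of LIF units."""

    def __init__(self, n_in=100, n_hid=128, n_out=10, decay=0.9):
        super().__init__()
        self.fc1 = torch.nn.Linear(n_in, n_hid)
        self.fc2 = torch.nn.Linear(n_hid, n_hid)
        self.fc3 = torch.nn.Linear(n_hid, n_out)
        self.decay = decay  # membrane leak per time step; illustrative

    def forward(self, x):  # x: (time, batch, n_in) spike trains
        v1 = v2 = 0.0
        out = 0.0
        for xt in x:  # simple LIF dynamics, threshold fixed at 1.0
            v1 = self.decay * v1 + self.fc1(xt)
            s1 = spike_fn(v1 - 1.0); v1 = v1 * (1.0 - s1)  # reset on spike
            v2 = self.decay * v2 + self.fc2(s1)
            s2 = spike_fn(v2 - 1.0); v2 = v2 * (1.0 - s2)
            out = out + self.fc3(s2)  # accumulate readout over time
        return out / x.shape[0]
```
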
“…In summary, by leveraging a novel spiking RNN model with in vivo recordings, we have shown that the heterogeneous neural response profiles widely observed during behavior are constrained by local synaptic structures shaped by spike-timing-dependent plasticity mechanisms. Our model sits at the nexus between two recent trends in neural network modeling: first, recent work has successfully extended general-purpose learning algorithms (i.e., FORCE) designed for rate-based networks to networks with spiking units [35,47–49]. Second, there has been renewed interest in using RNNs to understand the role of biophysically motivated synaptic plasticity rules in the formation of stable neural assemblies for memory storage and retrieval [32,33,50].…”
Section: Discussion
confidence: 99%

“…SNN-IIR [26] is proposed by Fang et al. to search for the optimal synapse filter kernels and weights for SNNs to learn spatio-temporal patterns. Nicolas et al. [30] propose a sparse backpropagation method for SNNs that is faster and more memory-efficient.…”
Section: Related Work
confidence: 99%

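The sparse backpropagation referenced here (Perez-Nieves & Goodman, 2021) rests on the observation that the fast-sigmoid surrogate is negligible far from threshold, so gradients need only be computed for neurons whose membrane potential lies within a band around it. A conceptual sketch of that masking idea, not the paper's actual (kernel-level) implementation; the band width and β are assumed values:

```python
import torch

def sparse_surrogate_grad(v, threshold=1.0, band=0.4, beta=10.0):
    """Fast-sigmoid surrogate gradient restricted to an 'active' set.

    Neurons with |v - threshold| >= band would contribute a near-zero
    surrogate gradient anyway, so they are masked out entirely;
    backprop then only touches the sparse active set, which is what
    makes the method faster and more memory-efficient.
    """
    x = v - threshold
    active = x.abs() < band                       # sparse active set
    surrogate = 1.0 / (beta * x.abs() + 1.0) ** 2
    return torch.where(active, surrogate, torch.zeros_like(surrogate))
```
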
“…Different voxel grids are selected for the various datasets; in more detail, 2048, 512, 512, and 512 voxels are chosen for the DVS128-Gait-Day, ASL-DVS, N-MNIST, and HARDVS datasets. After considering the spatiotemporal discrepancy across the datasets, we set the scale (v_h, v_w, v_t) of the voxel grid to (10, 10, 10) for ASL-DVS, (4, 4, 4) for DVS128-Gait-Day, and (20, 2, 2) and (50, 30, 20) for the N-MNIST and HARDVS datasets, respectively. When building graphs for the voxel branch, the threshold R is set to 2.…”
Section: B. Implementation Details
confidence: 99%

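The quoted setup bins an event-camera stream into a fixed (v_h, v_w, v_t) voxel grid before building graphs. A minimal sketch of that voxelization step, assuming events are given as (x, y, t) coordinate arrays; the function name and the normalization scheme are illustrative assumptions:

```python
import numpy as np

def voxelize_events(xs, ys, ts, grid=(10, 10, 10)):
    """Accumulate events (x, y, t) into a (v_h, v_w, v_t) count grid.

    Each axis is normalized to [0, 1) and binned; the grid argument
    corresponds to the (v_h, v_w, v_t) scale in the quoted passage,
    e.g. (10, 10, 10) for ASL-DVS or (4, 4, 4) for DVS128-Gait-Day.
    """
    v_h, v_w, v_t = grid
    counts = np.zeros(grid, dtype=np.int64)

    def to_bins(a, n):
        a = (a - a.min()) / (a.max() - a.min() + 1e-9)  # scale to [0, 1)
        return np.minimum((a * n).astype(int), n - 1)

    # Scatter-add one count per event into its voxel
    np.add.at(counts, (to_bins(ys, v_h), to_bins(xs, v_w), to_bins(ts, v_t)), 1)
    return counts
```
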