2020
DOI: 10.1007/978-3-030-61616-8_51
Spike-Train Level Unsupervised Learning Algorithm for Deep Spiking Belief Networks

Cited by 1 publication (1 citation statement) | References 20 publications
“…The dynamic threshold process in Section 2.2.1 is similar to how existing methods incorporate membrane potential values in the error signal (Gütig, 2016; Xiao et al., 2019; Yu et al., 2019; Li and Yu, 2020), but without strict precision requirements. In principle, the proposed error function may be extended to train deep or recurrent architectures using techniques that propagate error gradients based on the Widrow-Hoff window, such as Wang et al. (2016), Lin and Shi (2018), and Lin and Du (2020). The missing component is how to correctly incorporate the proposed adaptive ‘learning rate’ variables in such methods, which we leave to future work.…”
Section: Discussion
confidence: 99%
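The Widrow-Hoff window referenced in the statement above generalizes the classic Widrow-Hoff (LMS) weight update to spike trains. As a minimal sketch of the underlying rule only (the variable names, dimensions, and learning rate here are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def widrow_hoff_step(w, x, target, lr=0.1):
    """One LMS update: w <- w + lr * (target - y) * x, where y = w . x."""
    y = np.dot(w, x)
    return w + lr * (target - y) * x

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])  # hidden weights generating the targets
w = np.zeros(3)
for _ in range(200):
    x = rng.normal(size=3)           # input pattern (e.g. a filtered spike trace)
    w = widrow_hoff_step(w, x, np.dot(w_true, x))
# w now closely approximates w_true
```

In spike-train learning rules built on this window, `x` would be a kernel-filtered presynaptic spike trace and the error term a difference between desired and actual output activity rather than a scalar target.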