2022
DOI: 10.3389/fnins.2022.1018006

E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware

Abstract: Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, with many applications such as adapting models to specific situations, e.g., changes in environmental conditions, or optimization for individual users, such as speaker adaptation in speech processing. Also, …
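The learning rule at the center of the paper is e-prop, which replaces backpropagation through time with locally computed eligibility traces combined with an online learning signal broadcast from the readout error. The following is a minimal NumPy sketch of an e-prop-style update for the input weights of a single recurrent LIF layer; all shapes, constants, and variable names are illustrative assumptions and do not reflect the authors' SpiNNaker 2 implementation.

```python
import numpy as np

# Hedged sketch of an e-prop-style weight update for a recurrent LIF layer.
# Simplifications: only input weights are trained, no threshold adaptation,
# no readout leak; random feedback (broadcast) weights carry the error.
rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 20, 50, 3, 100      # assumed sizes
dt, tau_m, v_th = 1e-3, 20e-3, 0.6          # assumed time step, membrane constants
alpha = np.exp(-dt / tau_m)                 # membrane decay factor
eta = 1e-3                                  # learning rate

W_in  = rng.normal(0, 1 / np.sqrt(n_in),  (n_rec, n_in))
W_rec = rng.normal(0, 1 / np.sqrt(n_rec), (n_rec, n_rec))
W_out = rng.normal(0, 1 / np.sqrt(n_rec), (n_out, n_rec))
B     = rng.normal(0, 1 / np.sqrt(n_rec), (n_out, n_rec))  # random feedback weights

x = (rng.random((T, n_in)) < 0.05).astype(float)  # dummy Poisson-like input spikes
y_star = rng.random((T, n_out))                   # dummy target signal

v = np.zeros(n_rec)          # membrane potentials
z = np.zeros(n_rec)          # recurrent spikes
trace_in = np.zeros(n_in)    # low-pass filtered presynaptic spikes
dW_in = np.zeros_like(W_in)  # accumulated weight update

for t in range(T):
    v = alpha * v + W_in @ x[t] + W_rec @ z - v_th * z   # LIF dynamics, soft reset
    z = (v > v_th).astype(float)                          # spike generation
    psi = 0.3 * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))  # surrogate derivative
    trace_in = alpha * trace_in + x[t]                    # presynaptic eligibility vector
    e = psi[:, None] * trace_in[None, :]                  # eligibility traces e_ji
    y = W_out @ z                                         # linear readout
    L = B.T @ (y - y_star[t])                             # per-neuron learning signal
    dW_in += L[:, None] * e                               # online accumulation

W_in -= eta * dW_in   # apply the update after the sequence
```

Because every quantity in the loop depends only on the current time step and running traces, the update can be computed online without storing the full spike history, which is what makes rules of this family attractive for on-chip learning.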

Cited by 11 publications (3 citation statements)
References 43 publications
“…Even though the deployed algorithm can be further optimized for SENECA (for example, by quantization, sparsification, and spike grouping), it demonstrates the capability of SENECA to execute such a complex pipeline efficiently. Due to the algorithm's popularity, e-prop and its close variants have been benchmarked on several other neuromorphic processors (Tang et al., 2021; Frenkel and Indiveri, 2022; Perrett et al., 2022; Rostami et al., 2022). Those implementations are either (1) forced to be less efficient due to hardware-algorithm mismatch (Tang et al., 2021; Perrett et al., 2022; Rostami et al., 2022) or (2) hard-wired to execute only a limited version of this algorithm (Frenkel and Indiveri, 2022), which cannot adapt to deploy newer and more efficient online learning algorithms (Yin et al., 2021; Bohnstingl et al., 2022).…”
Section: Recurrent On-device Learning With E-prop
confidence: 99%
“…4. For the traditional multi-layer ARCSe neural network the diagram (Left panel) includes the connection weights (27)…”
Section: Application To Neural Network and Machine Learning
confidence: 99%
“…Spike-based methods were also used for object tracking [23][24][25][26]. Research is booming in using LIF spiking networks for online learning 27, braille letter reading 28, and different neuromorphic synaptic devices 29 for the detection and classification of biological problems [30][31][32][33][34][35][36]. Significant research is focused on achieving human-level control 37, optimizing back-propagation algorithms for spiking networks [38][39][40], as well as penetrating much deeper into the ARCSe core [41][42][43][44] with a smaller number of time steps 41, using an event-driven paradigm 36,40,45,46, applying batch normalization 47, scatter-and-gather optimizations 48, supervised plasticity 49, time-step binary maps 50, and using transfer learning algorithms 51.…”
confidence: 99%