2021 58th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac18074.2021.9586133
Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning

Cited by 12 publications (5 citation statements)
References 15 publications
“…Based on the neural dynamics, neuroscientists have developed many neural models, represented by the Hodgkin-Huxley (Hodgkin et al., 1952), the leaky integrate-and-fire (LIF) (Dayan and Abbott, 2005), and the Izhikevich (Izhikevich et al., 2004) models. As a simplified model of neuron dynamics, the LIF model has garnered a great deal of interest from algorithm researchers and has been extensively implemented in SNNs (Lee et al., 2016; Wu et al., 2018; Cheng et al., 2020; Yin et al., 2020; Fang et al., 2021). The LIF model captures the intuitive properties of external input accumulating charge across a leaky neural membrane with a clear threshold (Tavanaei et al., 2019).…”
Section: Methods (mentioning)
confidence: 99%
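
To make the quoted LIF dynamics concrete, below is a minimal sketch of a discrete-time LIF neuron in Python. It illustrates the general model only, not code from the cited works; the function name lif_simulate and the constants (leak factor beta, threshold v_th) are assumed for the example.

    def lif_simulate(inputs, beta=0.9, v_th=1.0, v_reset=0.0):
        """Discrete-time LIF neuron: the membrane potential leaks by a
        factor beta each step, integrates the input current, and emits
        a spike (then resets) once it crosses the threshold v_th."""
        v = v_reset
        spikes = []
        for x in inputs:
            v = beta * v + x          # leaky integration of the input
            if v >= v_th:             # threshold crossing -> spike
                spikes.append(1)
                v = v_reset           # hard reset after firing
            else:
                spikes.append(0)
        return spikes

    # A sustained input drives the neuron over threshold on step 3.
    print(lif_simulate([0.5, 0.5, 0.5, 0.0, 0.0]))  # [0, 0, 1, 0, 0]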
“…In SNNs, synaptic strengths are described as scalar weights that can be dynamically modified according to a particular learning rule. The actively investigated learning rules of SNNs generally fall into three categories: conversion-based methods that map trained ANNs onto SNNs (Diehl et al., 2016; Hunsberger and Eliasmith, 2016; Rueckauer et al., 2016, 2017; Sengupta et al., 2019; Han et al., 2020); supervised learning with spikes that directly trains SNNs using variations of error backpropagation (Lee et al., 2016; Shrestha and Orchard, 2018; Wu et al., 2018, 2019; Neftci et al., 2019; Yin et al., 2020; Fang et al., 2021); and local learning rules at synapses, such as schemes exploiting spike-timing-dependent plasticity (STDP) (Song et al., 2000; Nessler et al., 2009; Diehl and Cook, 2015; Tavanaei et al., 2016; Masquelier and Kheradpisheh, 2018). Beyond these directions, many new algorithms have emerged, such as a biologically plausible backpropagation implementation in pyramidal neurons based on the bursting mechanism (Payeur et al., 2021); a biologically plausible online learning rule based on rewards and eligibility traces (Bellec et al., 2020); and target-based learning in recurrent spiking networks (Ingrosso and Abbott, 2019; Muratore et al., 2021), which provides an alternative to error-based approaches.…”
Section: Methods (mentioning)
confidence: 99%
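
As a concrete instance of the third category above, below is a minimal sketch of the classic pair-based STDP update (after Song et al., 2000): a synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. The amplitudes and time constants are illustrative placeholders, not parameters from the cited papers.

    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP weight change for one pre/post spike pair
        (times in ms): exponentially decaying potentiation when the
        pre spike leads, exponentially decaying depression when it lags."""
        dt = t_post - t_pre
        if dt >= 0:   # pre before post -> potentiation (LTP)
            return a_plus * math.exp(-dt / tau_plus)
        else:         # post before pre -> depression (LTD)
            return -a_minus * math.exp(dt / tau_minus)

    print(stdp_dw(10.0, 15.0))  # pre leads by 5 ms: positive dw
    print(stdp_dw(15.0, 10.0))  # post leads by 5 ms: negative dw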
“…In [249], Fang et al. propose a neuromorphic algorithm-hardware co-design for temporal pattern learning. The authors first propose an efficient training algorithm for SNNs with LIF neurons to learn from temporal data.…”
Section: Hardware-Software Co-design (mentioning)
confidence: 99%
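
The exact training algorithm of [249] is not reproduced here; as a generic sketch of how the "supervised learning with spikes" family handles the non-differentiable LIF spike, the snippet below pairs a Heaviside forward pass with a smooth surrogate derivative for the backward pass. The fast-sigmoid shape and the slope constant are common choices assumed for illustration.

    import numpy as np

    def spike_fn(v, v_th=1.0):
        """Forward pass: Heaviside step, emitting a spike when the
        membrane potential v reaches the threshold v_th."""
        return (v >= v_th).astype(float)

    def surrogate_spike_grad(v, v_th=1.0, slope=5.0):
        """Backward pass: a smooth, fast-sigmoid-shaped stand-in for
        the spike derivative, so backpropagation can pass through the
        threshold; it peaks where v is closest to v_th."""
        return 1.0 / (slope * np.abs(v - v_th) + 1.0) ** 2

    v = np.array([0.2, 0.9, 1.1])
    print(spike_fn(v))              # [0. 0. 1.]
    print(surrogate_spike_grad(v))  # gradient largest near threshold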