2020
DOI: 10.1002/cta.2753
Digital multiplier‐less implementation of high‐precision SDSP and synaptic strength‐based STDP

Abstract: Spiking neural networks (SNNs) can achieve lower latency and higher efficiency compared with traditional neural networks if they are implemented in dedicated neuromorphic hardware. In both biological and artificial spiking neuronal systems, synaptic modifications are the main mechanism for learning. Plastic synapses are thus the core component of neuromorphic hardware with on-chip learning capability. Recently, several research groups have designed hardware architectures for modeling plasticity in SNNs for var…
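The abstract describes synaptic plasticity (STDP) as the core learning mechanism modeled in neuromorphic hardware. As background, a minimal pair-based STDP weight update can be sketched as follows; this is an illustrative textbook rule, not the paper's multiplier-less SDSP/STDP formulation, and the amplitude and time-constant values are assumed example parameters.

```python
import math

# Assumed example parameters (not taken from the paper):
# potentiation/depression amplitudes and time constants in ms.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms).

    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it, each with exponential decay in |dt|.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

A hardware-friendly variant would replace the exponentials with lookup tables or shift-based approximations to avoid multipliers, which is the general direction the paper's "multiplier-less" design takes.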

Cited by 2 publications (1 citation statement)
References 46 publications
“…Moreover, the logic capacity of FPGA to implement complex neural algorithms and prototypes without requiring VLSI chip fabrication makes it a brilliant choice. [19][20][21][22][23][24][25][26][27][28][29][30][31] FPGA-based on-chip learning and off-chip learning, which are sometimes known as online learning and offline learning, are the main methods for learning implementation in the hardware at register transfer level (RTL). 26,27 Motivated by these findings, this paper proposes an efficient and high-speed reconfigurable digital implementation of an SNN using Izhikevich neurons and gradient descent learning on an FPGA to approximate the sigmoid function.…”
Section: Introduction
confidence: 99%