2021 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas51556.2021.9401530

A Crossbar Array of Analog-Digital-Hybrid Volatile Memory Synapse Cells for Energy-Efficient On-Chip Learning

Abstract: Conventional-silicon-transistor-based Volatile Memory (VM) synapse has been proposed as an alternative to the Non-Volatile Memory (NVM) synapse in crossbar-array-based neuromorphic/in-memory-computing systems. Here, through SPICE simulations, we have designed an analog-digital-hybrid Volatile Memory Synapse Cell (VMSC) for such a crossbar array of VM synapses. In our VMSC, the transistor synapse stores nearly analog values of weight. But the other transistors, which carry out the weight update for the transistor …
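
The following is a minimal behavioral sketch of the analog-digital-hybrid idea outlined in the abstract: an analog weight held as a voltage on the synapse transistor's storage capacitance, updated by a small number of digital charge pulses. It is an illustration only, not the authors' SPICE-level design; the class name and all numerical values (capacitance, pulse charge, leakage constant) are assumptions.

```python
# Behavioral sketch of an analog-digital-hybrid volatile-memory synapse cell (VMSC).
# Assumptions: weight = analog voltage on a storage capacitance; updates = a few
# digital pulses of fixed charge; volatile storage decays between writes.
import math


class VolatileMemorySynapse:
    def __init__(self, c_store_f=1.6e-15, q_pulse_c=0.05e-15, tau_leak_s=1e-3):
        self.c_store_f = c_store_f    # storage capacitance (F); value assumed, of the
                                      # order of the 1.6 fF capacitor discussed by citing papers
        self.q_pulse_c = q_pulse_c    # charge delivered per update pulse (C); assumed
        self.tau_leak_s = tau_leak_s  # leakage time constant (s); assumed
        self.v_weight = 0.0           # analog weight, stored as a voltage (V)

    def update(self, n_pulses):
        """Digital weight update: apply a signed integer number of charge pulses."""
        self.v_weight += n_pulses * self.q_pulse_c / self.c_store_f

    def leak(self, dt_s):
        """Volatile storage: the held voltage decays until it is rewritten."""
        self.v_weight *= math.exp(-dt_s / self.tau_leak_s)


syn = VolatileMemorySynapse()
syn.update(+2)   # a small, digitally confined update step
syn.leak(1e-4)   # stored weight droops between training/refresh events
print(f"stored weight voltage ~ {syn.v_weight:.4f} V")
```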

Cited by 2 publications (4 citation statements)
References 36 publications
“…In ref. 4, the power dissipation issue is addressed by altering the NN training method such that the weight update is quantized and confined to only two bits. By removing the additional 1.6 fF capacitor and relying solely on the gate-to-source capacitance of the MOSFETs, the area footprint is also reduced.…”
Section: Synaptic Devices (mentioning, confidence: 99%)
“…By removing the additional 1.6 fF capacitor and relying solely on the gate-to-source capacitance of the MOSFETs, the area footprint is also reduced [4]. 4.2.6. RRAM-based synaptic device.…”
Section: Synaptic Devices (mentioning, confidence: 99%)
“…The most intensive and frequently occurring computational task in implementing an FCNN is vector-matrix multiplication (VMM), i.e., the multiplication of an input vector with a weight matrix. This operation maps naturally onto the parallel computing capability of a crossbar memory array, which makes the VMM operation easy to realize. Weight-matrix values are stored in memory devices called synaptic devices, such as the metal-oxide-semiconductor field-effect transistor (MOSFET) [6], resistive random-access memory (RRAM), phase-change memory (PCM), analog-digital-hybrid volatile memory, and magnetic domain-wall (DW) devices [12,13]. Training in a hardware NN is done by updating the conductances of these synaptic devices, which are mapped to the stored weight values, after each epoch, followed by the activation function [14,15].…”
Section: Introduction (mentioning, confidence: 99%)
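
To make the crossbar VMM in the quotation above concrete, here is a minimal numerical sketch: input voltages drive the rows, each cross-point stores a conductance proportional to a weight, and the column currents sum to the vector-matrix product in a single step (Ohm's law plus Kirchhoff's current law). All values below are illustrative placeholders, not device data from the paper.

```python
# Crossbar VMM sketch: I_j = sum_i V_i * G_ij, computed in parallel per column.
import numpy as np

v_in = np.array([0.2, 0.0, 0.1])        # input vector applied as row voltages (V)
g = np.array([[10e-6, 25e-6],           # weight matrix stored as conductances (S),
              [ 5e-6, 15e-6],           # one synaptic device per row-column crossing
              [20e-6, 30e-6]])

i_out = v_in @ g                        # column currents = analog VMM result
print(i_out)                            # read out per column, e.g. by an ADC/neuron circuit
```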