2018
DOI: 10.1109/tbcas.2017.2762002
An On-Chip Learning Neuromorphic Autoencoder With Current-Mode Transposable Memory Read and Virtual Lookup Table

Abstract: This paper presents an IC implementation of an on-chip learning neuromorphic autoencoder unit in the form of a rate-based spiking neural network. With a current-mode signaling scheme embedded in a 500 × 500 6b SRAM-based memory, the proposed architecture achieves simultaneous processing of multiplications and accumulations. In addition, a transposable memory read for both forward and backward propagations and a virtual lookup table are also proposed to perform unsupervised learning of a restricted Boltzmann machine.…
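The abstract's key claim is that a current-mode read of the weight memory computes all multiplications and accumulations in one step: each asserted input line injects a current scaled by the stored 6-bit weight, and the per-column currents sum naturally. The following is a minimal behavioral sketch of that idea in NumPy, not the authors' circuit; the array size and spike rate are taken from or assumed around the abstract, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                                   # 500 x 500 synaptic array (per the abstract)
W = rng.integers(0, 64, size=(N, N))      # 6-bit weights (0..63)

def current_mode_read(weights, spikes):
    """Model of a current-mode array read: every row whose input spike
    is asserted injects a current proportional to its stored weight,
    and the currents on each column sum -- so one read cycle yields a
    full matrix-vector product (all MACs in parallel)."""
    return weights.T @ spikes             # summed column currents

spikes = (rng.random(N) < 0.1).astype(int)  # rate-coded input spikes
i_col = current_mode_read(W, spikes)
print(i_col[:5])
```

In the analog array this sum is free (Kirchhoff's current law on the bit line); the NumPy matrix product only stands in for that physical summation.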

Cited by 18 publications (3 citation statements)
References 22 publications
“…To support this bidirectional memory access, memory cells for the synaptic weights must either be transposable or be accessed row by row in succession. Transposable memory cells and array structures [22] incur area overhead owing to the additional transistors and metal lines they require. In addition, frequent memory access increases the energy needed to learn a single image.…”
Section: B. Post-Neuron Spike-Referred STDP (PR-STDP), mentioning
confidence: 99%
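The bidirectional access this statement refers to comes from RBM training itself: the forward pass reads the weight array one way and the reconstruction pass reads its transpose. A short sketch of the access pattern, under assumed notation rather than the cited design:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((500, 500))       # one physical synaptic array

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.random(500)                       # visible-layer activations

# Forward propagation: each hidden unit accumulates one COLUMN of W,
# i.e. it needs a W^T read of the array.
h = sigmoid(W.T @ v)

# Backward (reconstruction) propagation: each visible unit accumulates
# one ROW of W.
v_recon = sigmoid(W @ h)
```

A transposable memory supplies both the row-wise and column-wise reads from the same cells; without it, one of the two directions needs either a slow element-by-element scan or a second, mirrored copy of the weights, which is the area/energy trade-off the quoted passage describes.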
“…There are two methods for training the weight values, on-chip and off-chip training, and both require memristor devices that can be programmed to an optimized weight state. In on-chip training, also called in situ training, the weight of the synaptic device is updated on the chip, so additional peripheral circuits are needed to apply the corresponding teaching signals, and the memristor needs endurance characteristics good enough to withstand a large number of learning events [32][33][34]. In contrast, in off-chip training, also called ex situ training, weight values pre-trained by software algorithms are transferred to individual cells in the synaptic array [35][36][37]; this requires a memristor with programmable multilevel conductance states and an accurate tuning method to realize floating-point weight values precisely, yet tuning errors relative to the pre-trained values are inevitable [38, 39].…”
Section: Introduction, mentioning
confidence: 99%
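The tuning error mentioned at the end of this statement follows directly from mapping continuous weights onto a finite set of conductance levels. A small illustrative sketch of ex situ transfer, with an arbitrary level count and weight range chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
w_trained = rng.uniform(-1.0, 1.0, size=10_000)  # software-trained weights

LEVELS = 16                                      # assumed multilevel conductance states
targets = np.linspace(-1.0, 1.0, LEVELS)         # programmable weight levels

# Ex situ transfer: program each weight to its nearest conductance level.
idx = np.abs(w_trained[:, None] - targets[None, :]).argmin(axis=1)
w_device = targets[idx]

tuning_err = np.abs(w_device - w_trained)
print(f"mean tuning error: {tuning_err.mean():.4f}, max: {tuning_err.max():.4f}")
```

With ideal programming the worst-case error is half the level spacing; real devices add programming noise and drift on top, which is why the cited works emphasize accurate tuning methods.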
“…Some DNNs, such as MLPs [24, 25], RBMs [26, 27], and CNNs [28–31], have been developed as dedicated chips. One of these reports uses RBMs for training and AEs for inference [32]. In addition to these examples of hardware implementations of DNNs, FPGA implementations of CNNs and RBMs have also been reported [33–38].…”
Section: Introduction, mentioning
confidence: 99%
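The RBM-for-training, autoencoder-for-inference pattern this statement attributes to [32] is commonly realized with tied weights: the matrix learned by contrastive divergence serves as the encoder, and its transpose as the decoder. A minimal sketch of that scheme (a generic tied-weight formulation, not necessarily the exact method of [32]; biases omitted and hidden probabilities used in place of stochastic samples for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, lr = 784, 256, 0.01
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One CD-1 (contrastive divergence) weight update."""
    global W
    h0 = sigmoid(v0 @ W)
    v1 = sigmoid(h0 @ W.T)                # reconstruction via the transpose
    h1 = sigmoid(v1 @ W)
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))

def autoencode(v):
    """Inference: encode with W, decode with the tied transpose W^T."""
    return sigmoid(sigmoid(v @ W) @ W.T)

v = (rng.random(n_vis) < 0.3).astype(float)
cd1_step(v)                               # training mode (RBM)
print(autoencode(v).shape)                # inference mode (AE) -> (784,)
```

Note that both phases use the same weight array in both orientations, which ties this example back to the transposable-memory-read motivation discussed above.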