2018 IEEE International Conference on Rebooting Computing (ICRC)
DOI: 10.1109/icrc.2018.8638604

SNRA: A Spintronic Neuromorphic Reconfigurable Array for In-Circuit Training and Evaluation of Deep Belief Networks

Abstract: In this paper, a Spintronic Neuromorphic Reconfigurable Array (SNRA) is developed to fuse together power-efficient probabilistic and in-field programmable deterministic computing during both the training and evaluation phases of restricted Boltzmann machines (RBMs). First, probabilistic spin logic devices are used to develop an RBM realization, which is adapted to construct deep belief networks (DBNs) having one to three hidden layers of 10 to 800 neurons each. Second, we design a hardware implementation for t…
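As a rough software illustration of the setup the abstract describes (not the paper's spintronic hardware), an RBM layer computes hidden-unit probabilities from the visible units and samples them stochastically, and several such layers can be stacked into a DBN; the layer sizes below are illustrative picks from the paper's stated 10-800 range, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hidden(v, W, b):
    """One probabilistic RBM step: sigmoid activation probabilities
    followed by a Bernoulli draw (the stochastic part)."""
    p_h = 1.0 / (1.0 + np.exp(-(v @ W + b)))      # sigmoid(vW + b)
    return (rng.random(p_h.shape) < p_h).astype(float), p_h

# Toy DBN: visible layer plus two hidden layers (sizes illustrative).
sizes = [784, 100, 10]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

v = rng.random((1, sizes[0]))        # dummy visible vector
for W, b in zip(weights, biases):
    v, _ = sample_hidden(v, W, b)    # feed sampled states upward, layer by layer
print(v.shape)                       # (1, 10)
```

In a DBN each RBM is trained greedily on the samples produced by the layer below it, which is why the sampling step, not just the matrix multiply, sits on the critical path.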

Cited by 4 publications (2 citation statements)
References 33 publications
“…While there has been significant progress in advancing customized silicon DNN hardware (ASICs and FPGAs) [5,9] to improve computational throughput, scalability, and efficiency, their performance (speed and energy efficiency) is fundamentally limited by the underlying electronic components. Even with the recent progress of integrated analog signal processors in accelerating DNN systems, which focus on accelerating matrix multiplication, such as the Vector Matrix Multiplying module (VMM) [20], the mixed-mode Multiplying-Accumulating unit (MAC) [1,12,27], resistive random access memory (RRAM) based MAC [2,7,8,26,28], etc., parallelization is still highly limited.…”
Section: Introduction
confidence: 99%
“…Nevertheless, matrix multiplication is just one side of the coin in stochastic neural networks. Stochastic sampling forms the other vital operation, aligning with the stochastic computing architecture inherent to generative neural networks.…”
confidence: 99%
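The distinction this excerpt draws, matrix multiplication versus stochastic sampling, can be sketched as two separate steps; a VMM/MAC-style accelerator covers only the first. A minimal sketch, with all names assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

def matmul_step(v, W, b):
    # Deterministic part: the multiply-accumulate that VMM/MAC/RRAM
    # accelerators target.
    return v @ W + b

def stochastic_sample(activation):
    # Stochastic part: a per-unit Bernoulli draw from sigmoid
    # probabilities, which generative (e.g. RBM-based) networks
    # need in addition to the matrix multiply.
    p = 1.0 / (1.0 + np.exp(-activation))
    return (rng.random(p.shape) < p).astype(float)

W = rng.normal(0, 0.1, (8, 4))
b = np.zeros(4)
v = rng.random((1, 8))
h = stochastic_sample(matmul_step(v, W, b))   # binary hidden states
```

Devices that generate randomness natively, such as the probabilistic spin logic devices in the SNRA paper above, target this second step rather than the multiply-accumulate.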