2017
DOI: 10.1109/jssc.2017.2715171
A 41.3/26.7 pJ per Neuron Weight RBM Processor Supporting On-Chip Learning/Inference for IoT Applications

Cited by 18 publications (6 citation statements)
References 10 publications
“…In fact, binary stochastic neurons are desired for deep learning networks, but are typically avoided because it is harder to generate random bits in CMOS hardware 77 . Use of this compact neuron that relies on MTJs natural physics to provide stochastic binarization could accelerate computation in custom hardware 78,79 by faster evaluation of BSN function 32 and also encourage algorithmic advancement using BSN.…”
Section: Discussion (mentioning)
confidence: 99%
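The excerpt above refers to binary stochastic neurons (BSNs), which fire with a probability given by a sigmoid of their input rather than deterministically. As a rough illustration only (this is not code from the cited hardware work, and the function name is made up for this sketch), a BSN can be modeled as:

```python
import math
import random

def binary_stochastic_neuron(pre_activation, rng=random.random):
    """Return 1 with probability sigmoid(pre_activation), else 0."""
    p = 1.0 / (1.0 + math.exp(-pre_activation))
    return 1 if rng() < p else 0

random.seed(0)
samples = [binary_stochastic_neuron(0.5) for _ in range(10000)]
# Empirical firing rate is close to sigmoid(0.5) ≈ 0.62
print(sum(samples) / len(samples))
```

The point of the cited MTJ-based approach is that the random draw here (`rng()`), which is expensive to produce in CMOS, comes essentially for free from the device physics.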
“…Thus, it seems as if [38] is optimized for 16 bit × 16 bit. In general, higher flexibility comes with the price of reduced efficiency [38], [41], [42]. For this reason, most hardware accelerators for DNNs support only a limited number of DNNs very efficiently.…”
Section: Quantization (mentioning)
confidence: 99%
“…Content may change prior to final publication. [Garbled excerpt: a table mapping acceleration techniques (compression, zero-skipping, bit-serial processing) to cited works; the original layout is not recoverable.]…”
Section: Voltage Underscaling (mentioning)
confidence: 99%
“…The RBM is a stochastic generative neural network that can learn the probability distribution from the input datasets. RBMs are applied in dimension reduction [7], classification [8], collaborative filtering [9], feature learning [10], topic modeling [11], radar target automatic recognition [12], chip synthesis [13], and speech recognition [14]. RBMs can be trained by either supervised or unsupervised learning depending on the different tasks.…”
Section: Introduction (mentioning)
confidence: 99%
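The excerpt above describes the RBM as a stochastic generative network that learns a probability distribution from data; the standard training procedure (and the one a learning-capable RBM processor accelerates) is contrastive divergence. The following is a minimal, illustrative CD-1 sketch, not the cited chip's algorithm; all function names and the tiny 3-visible/2-hidden sizing are assumptions for the example:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, b_h, rng):
    """Sample binary hidden units given a binary visible vector v."""
    h = []
    for j in range(len(b_h)):
        act = b_h[j] + sum(W[i][j] * v[i] for i in range(len(v)))
        h.append(1 if rng.random() < sigmoid(act) else 0)
    return h

def sample_visible(h, W, b_v, rng):
    """Sample binary visible units given a binary hidden vector h."""
    v = []
    for i in range(len(b_v)):
        act = b_v[i] + sum(W[i][j] * h[j] for j in range(len(h)))
        v.append(1 if rng.random() < sigmoid(act) else 0)
    return v

def cd1_update(v0, W, b_v, b_h, lr, rng):
    """One CD-1 weight update: one Gibbs step, then positive-minus-negative statistics."""
    h0 = sample_hidden(v0, W, b_h, rng)
    v1 = sample_visible(h0, W, b_v, rng)
    h1 = sample_hidden(v1, W, b_h, rng)
    for i in range(len(v0)):
        for j in range(len(b_h)):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

rng = random.Random(0)
W = [[0.0] * 2 for _ in range(3)]  # 3 visible x 2 hidden units
W = cd1_update([1, 0, 1], W, [0.0] * 3, [0.0] * 2, 0.1, rng)
print(W)
```

With binary units and learning rate 0.1, each weight moves by at most ±0.1 per update, which is why fixed-point on-chip implementations of this loop can use very narrow weight arithmetic.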