2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2018.8342235
XNOR-RRAM: A scalable and parallel resistive synaptic architecture for binary neural networks

Cited by 184 publications (115 citation statements)
References 12 publications
“…Table 6. Accuracies on the MNIST dataset of non-binary networks related to works reviewed:

Source  Accuracy (%)  Topology
[52]    95.7          FC200-3FC100-FC10
[1]     96.0          MLP
[51]    97            NK
[53]    97.0          LeNet
[54]    97.69         MLP
[55]    97.86         ConvPool-2
[35]    98.25         1/4 MLP
[41]    98.4          MLP
[56]    98.40         MLP
[57]    98.6          NK
[58]    98.67         MLP
[59]    98.77         FC784-3FC512-FC10
…”
Section: Source Accuracy (%)
confidence: 99%
“…Table 10:

Source  Accuracy (%)  Topology / Precision
[61]    86.06         5 conv. and 2 FC
[65]    86.78         NK
[35]    86.98         C64-MP-2C128-MP-2C256-2FC512-FC10
[51]    87            NK
[34]    87.16         AlexNet, R1 regularizer
[34]    87.30         AlexNet, R2 regularizer
[38]    87.73         BNN, +1 padding
[32]    88            BNN, 512 channels for FC
[38]    88.42         BNN, 0 padding
[59]    88.47         6 conv.
[39]    88.61         NK
[1]     89.85         BNN
…”
Section: Source Accuracy (%) Topology Precision
confidence: 99%
“…In each of these works the resistive element is used to perform simple logical operations based on a current-sensing technique. In [26,27,30,31], several Binary Convolutional Neural Network (BCNN) implementations are discussed: they achieve very good results in terms of energy and power, thanks to the intrinsically low-power nature of the MTJ and RRAM devices. Reference [28] proposes a BNN design based on an SRAM array.…”
Section: NN Implementations Based On LIM Concept
confidence: 99%
“…Previous works on ReRAM-based accelerators for BNNs, such as Sun et al. (2018b) and Tang et al. (2017), failed to perform real in-situ computation in the ReRAM. In Sun et al. (2018b) and Tang et al. (2017), ReRAM arrays are only used as processing engines for low-precision multiplication, and extra logic components are deployed for the nonlinear activation and binary quantization. In this work, we enable real in-situ computation in ReRAM for BNN processing through a software/hardware co-design approach, including (1) a modified hardware-oriented operation flow and (2) the fused activation with the CRC array.…”
Section: Fused Activation With CRC Array
confidence: 99%
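The citing works above all build on the same primitive: a binary neural network replaces the multiply-accumulate of a dot product with XNOR plus popcount, which is what the XNOR-RRAM crossbar evaluates in parallel via current sensing. A minimal software sketch of that operation (function name and bit encoding are illustrative assumptions, not from the paper):

```python
# Sketch of the XNOR + popcount dot product that BNN accelerators
# map onto resistive crossbar arrays. Weights and activations are
# in {-1, +1}, packed as bits (bit 1 -> +1, bit 0 -> -1).

def binary_dot(x_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors stored as bit masks."""
    # XNOR marks positions where the two vectors agree (elementwise product = +1).
    agree = ~(x_bits ^ w_bits) & ((1 << n) - 1)
    matches = bin(agree).count("1")  # popcount
    # (+1) * matches + (-1) * (n - matches)
    return 2 * matches - n

# x = [+1, -1, +1, +1] -> 0b1011, w = [+1, +1, -1, +1] -> 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # -> 0 (products: +1, -1, -1, +1)
```

In the hardware these accelerators describe, the XNOR happens in the bitcell pair, the popcount is the summed bitline current, and the sign of `2 * matches - n` gives the binary activation.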