2020
DOI: 10.1109/jssc.2019.2963616
XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks


Cited by 278 publications (111 citation statements)
References 22 publications
“…Jeloka et al [4] use standard push-rule 6T SRAM cells as TCAM to reduce the cell area. For IM-DP, several novel bit-cells have been proposed. XNOR-SRAM [9] employs a 12T bit-cell to compute MAC based on the resistive voltage divider formed by access transistors. Yu et al [10] use an 8T bit-cell to support current-mode accumulation.…”
Section: B Related Work
confidence: 99%
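The excerpt above describes XNOR-SRAM computing a binary MAC in the analog domain via a resistive voltage divider. As a minimal software analogue only (the encoding and function name below are illustrative assumptions, not the paper's circuit), the same binary dot product can be expressed digitally as an XNOR followed by a popcount:

```python
# Hedged sketch: digital analogue of the XNOR-based binary MAC that
# XNOR-SRAM [9] evaluates in analog. Encoding and names are
# illustrative assumptions, not the paper's implementation.

def xnor_mac(activations: int, weights: int, n: int) -> int:
    """Dot product of two length-n vectors with elements in {-1, +1},
    each packed into an n-bit integer (bit=1 encodes +1, bit=0 encodes -1)."""
    # XNOR marks the bit positions where the operands agree
    # (elementwise product = +1); mask to the n valid bits.
    agree = ~(activations ^ weights) & ((1 << n) - 1)
    matches = bin(agree).count("1")  # popcount of agreements
    # Each agreement contributes +1, each disagreement -1.
    return 2 * matches - n

# Example: a = [+1,-1,+1,+1], w = [+1,-1,-1,+1] (MSB first)
# elementwise products: +1, +1, -1, +1  ->  sum = 2
print(xnor_mac(0b1011, 0b1001, 4))  # -> 2
```

This XNOR+popcount identity is what makes binary/ternary networks amenable to in-memory evaluation: the multiply reduces to a single-gate comparison per bit-cell, and the accumulate to a population count (summed as charge or current in the analog macro).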
“…Besides TCAM, deep neural networks (DNNs) also cost considerable delay and power using traditional computing paradigms due to frequent memory fetch to perform the dot product (also called multiply-and-accumulate, or MAC) operation. CIM solutions for DNNs [9]- [13] can improve the throughput and energy efficiency by performing massively parallel MAC operations inside the memory array, eliminating costly data transfer.…”
Section: Introduction
confidence: 99%
“…Moreover, on-chip training is also possible with SRAM-based CIM architectures [19]. Most of today's SRAM CIM prototypes [20][21][22] are targeted at demonstrating inference or on-chip training functionality or improving performance, while the security challenges in SRAM-CIM designs are largely unexplored.…”
Section: SRAM-based CIM
confidence: 99%
“…In-memory computing is another interesting topic to resolve the bottleneck of off-chip traffic of CNN computation. There have been some published works [44, 45] that support existing neural networks with a limited set of applications. This approach uses different techniques and is in parallel to the sparse CNN accelerator research.…”
Section: Related Work
confidence: 99%