2021
DOI: 10.48550/arxiv.2102.00035
Preprint
MF-Net: Compute-In-Memory SRAM for Multibit Precision Inference using Memory-immersed Data Conversion and Multiplication-free Operators

Abstract: We propose a co-design approach for compute-in-memory inference for deep neural networks (DNN). We use multiplication-free function approximators based on the ℓ1 norm along with a co-adapted processing array and compute flow. Using the approach, we overcame many deficiencies in the current art of in-SRAM DNN processing, such as the need for digital-to-analog converters (DACs) at each operating SRAM row/column, the need for high-precision analog-to-digital converters (ADCs), limited support for multi-bit precision weig…
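As a rough illustration of the multiplication-free operator the abstract refers to, the sketch below implements one common ℓ1-norm-based formulation, where an elementwise product is replaced by a sign-and-add operation. The function names (`mf_op`, `mf_dot`) and the exact operator form are assumptions for illustration; the paper's operator may differ in detail.

```python
import numpy as np

def mf_op(x, w):
    # Multiplication-free elementwise "product":
    # sign(x) * sign(w) * (|x| + |w|) replaces x * w,
    # combining sign agreement with the l1 magnitudes,
    # so no multiplier hardware is needed.
    return np.sign(x) * np.sign(w) * (np.abs(x) + np.abs(w))

def mf_dot(x, w):
    # Multiplication-free analogue of a dot product (np.dot):
    # only sign checks and additions are required per element.
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.sum(mf_op(x, w)))
```

For example, `mf_dot([1, -2], [3, 4])` evaluates to `(1+3) + (-(2+4)) = -2`, using only sign logic and adders where a conventional dot product would use two multiplies.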

Cited by 1 publication (1 citation statement)
References 22 publications
“…For Flash-ADC, we use a clocked rail-to-rail comparator as shown in Figure 9(c). A similar in-memory compute of DNN was discussed in details in [35].…”
Section: A Hybrid Digital and Compute-in-Memory Accelerator
confidence: 99%