2022
DOI: 10.1002/aisy.202200289

Optimal Weight‐Splitting in Resistive Random Access Memory‐Based Computing‐in‐Memory Macros

Abstract: Deep learning (DL) is now widely applied to many tasks that used to be performed with explicit sets of instructions. Improving DL capability requires deep neural networks (DNNs) to become deeper and larger. The widespread use of DL with deep, large DNNs imposes an immense workload on hardware, a trend that is expected to continue. To support this trend, new hardware that accelerates the major DL operations with reduced power consumption is in strong demand. The current mainstream hardware for DL is ge…
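The weight-splitting named in the title refers to partitioning multi-bit DNN weights across several RRAM cells, each of limited per-cell precision. As a minimal illustration of the general bit-slicing idea only (the abstract is truncated, so this is not the paper's specific optimal scheme; the 8-bit/4-bit split is an assumed example), the sketch below stores each 8-bit unsigned weight as two hypothetical 4-bit cell slices, computes a dot product per slice, and recombines the partial sums with digital shifts:

```python
import numpy as np

def split_weights(w, slice_bits=4, n_slices=2):
    """Split unsigned integer weights into bit-slices (LSB slice first).

    Each slice models one RRAM cell column holding slice_bits of precision.
    """
    mask = (1 << slice_bits) - 1
    return [(w >> (s * slice_bits)) & mask for s in range(n_slices)]

def cim_mac(x, w, slice_bits=4, n_slices=2):
    """Dot product evaluated slice-by-slice, then recombined digitally.

    Each per-slice dot product stands in for one analog MAC over a cell
    column; the shift-and-add recombination happens in the digital periphery.
    """
    slices = split_weights(w, slice_bits, n_slices)
    return sum(int(np.dot(x, w_s)) << (s * slice_bits)
               for s, w_s in enumerate(slices))

# 8-bit weights, each stored as two 4-bit cells: result matches a full-
# precision dot product.
x = np.array([1, 2, 3])
w = np.array([200, 17, 250])
assert cim_mac(x, w) == int(np.dot(x, w))
```

Splitting this way trades one high-precision cell for several low-precision ones plus shift-and-add logic; choosing the split to balance cell non-idealities against periphery cost is the kind of trade-off the paper's "optimal" in the title points at.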

Cited by 6 publications
References 24 publications