2023
DOI: 10.1088/2634-4386/acbab9

Hadamard product-based in-memory computing design for floating point neural network training

Abstract: Deep neural networks (DNNs) are one of the key fields of machine learning, and they require considerable computational resources for cognitive tasks. As a novel technology that performs computing inside or near memory units, in-memory computing (IMC) significantly improves computing efficiency by reducing the need for repetitive data transfer between processing and memory units. However, prior IMC designs mainly focus on accelerating DNN inference; DNN training with IMC hardware has rarely been proposed. …
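To illustrate why DNN training, unlike inference alone, needs element-wise (Hadamard) product support in addition to vector-matrix multiplication, the NumPy sketch below traces one dense layer through a forward and backward pass. The layer sizes, ReLU activation, and variable names are illustrative assumptions, not the paper's circuit-level design.

```python
# Minimal NumPy sketch (not the paper's IMC circuit): where the Hadamard
# product shows up in DNN training. The backward pass through an activation
# multiplies the upstream gradient element-wise with the activation
# derivative, i.e. a vector Hadamard product.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))     # weight matrix (hypothetical sizes)
x = rng.standard_normal(3)          # input vector

# Forward pass: vector-matrix multiplication followed by a ReLU activation.
z = W @ x                           # VMM, the operation IMC already accelerates for inference
a = np.maximum(z, 0.0)              # ReLU

# Backward pass: dL/dz is the Hadamard product of the upstream gradient
# with the ReLU derivative.
grad_a = rng.standard_normal(4)     # stand-in for dL/da from later layers
relu_grad = (z > 0).astype(z.dtype)
grad_z = grad_a * relu_grad         # element-wise (Hadamard) product
grad_W = np.outer(grad_z, x)        # weight gradient for the update step
```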

Cited by 1 publication (1 citation statement)
References 55 publications
“…In Fan et al [1], the authors developed efficient in-memory circuits to perform both the floating-point vector-matrix multiplication and the vector Hadamard product (element-wise product) operations that are necessary for neural network training. The proposed IMC design, validated with a 28 nm technology design kit, enables both neural network inference and training at the edge and improves data density by 3.5× compared to previous designs, with negligible precision loss relative to ideal BFloat16 computing.…”
Citation type: mentioning
Confidence: 99%
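For readers unfamiliar with the BFloat16 reference point mentioned in the statement above, the following sketch emulates BFloat16-precision vector-matrix multiplication and Hadamard product in software by truncating the lower 16 mantissa bits of FP32 values (a simple round-toward-zero assumption). It is only an emulation for gauging precision loss, not the hardware implementation evaluated in the paper.

```python
# Software emulation of BFloat16-precision VMM and Hadamard product,
# assuming truncation of the lower 16 bits of an FP32 word. Real BFloat16
# conversion typically rounds to nearest even; truncation is used here
# purely to keep the sketch short.
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to BFloat16 precision (stored back in float32)."""
    x = np.asarray(x, dtype=np.float32)
    bits = x.view(np.uint32) & np.uint32(0xFFFF0000)  # drop low 16 mantissa bits
    return bits.view(np.float32)

rng = np.random.default_rng(1)
W = to_bfloat16(rng.standard_normal((8, 8)))
x = to_bfloat16(rng.standard_normal(8))
g = to_bfloat16(rng.standard_normal(8))

vmm = to_bfloat16(W @ x)    # vector-matrix multiplication at BFloat16 precision
had = to_bfloat16(g * vmm)  # vector Hadamard (element-wise) product

# Compare against a full FP32 reference to gauge the precision loss.
err = np.max(np.abs(had - (g * (W @ x))))
print(f"max abs error vs. FP32 reference: {err:.3e}")
```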