2022 International Conference on Field-Programmable Technology (ICFPT)
DOI: 10.1109/icfpt56656.2022.9974324
Mixing Low-Precision Formats in Multiply-Accumulate Units for DNN Training

Abstract: The most compute-intensive stage of deep neural network (DNN) training is matrix multiplication, where the multiply-accumulate (MAC) operator is key. To reduce training costs, we consider using low-precision arithmetic for MAC operations. While low-precision training has been investigated in prior work, the focus has been on reducing the number of bits in weights or activations without compromising accuracy. In contrast, the focus in this paper is on implementation details beyond weight or activation width that…
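The abstract's core operation, a multiply-accumulate in which the multiply runs at reduced precision while the accumulator stays wider, can be sketched in NumPy. The specific formats below (float16 multiply, float32 accumulate) are an illustrative assumption, not the paper's exact configuration:

```python
import numpy as np

def mac_low_precision(a, b, acc):
    """One MAC step with mixed precision: the operands are rounded to
    float16 before the multiply, while the running sum is kept in
    float32 (an assumed format pairing for illustration)."""
    prod = np.float16(a) * np.float16(b)       # low-precision multiply
    return np.float32(acc) + np.float32(prod)  # wider accumulator

def dot_low_precision(x, y):
    """Dot product built from low-precision MACs, the inner loop of
    the matrix multiplications that dominate DNN training."""
    acc = np.float32(0.0)
    for a, b in zip(x, y):
        acc = mac_low_precision(a, b, acc)
    return acc
```

Keeping the accumulator wider than the multiplier inputs is a common way to limit rounding-error growth over long dot products; the paper's contribution concerns hardware implementation choices around such units rather than this numerical recipe itself.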

Cited by 3 publications. References 21 publications (23 reference statements).