2022
DOI: 10.1109/tcsi.2022.3184115
PL-NPU: An Energy-Efficient Edge-Device DNN Training Processor With Posit-Based Logarithm-Domain Computing

Cited by 7 publications (2 citation statements); references 31 publications.
“…Although posit arithmetic was designed to have circuitry similar to the floating-point format, the variable length of the fields and the signed hidden bit of the fraction require redesigning some of the logic when implementing posit operators. However, such an effort might be compensated by the benefits of using posit arithmetic: its higher accuracy, compared with standard floating-point, can reduce the bitwidth of the data and operations of scientific computations without sacrificing the accuracy of the results, with all the benefits this entails at the hardware level [25].…”
Section: A. Posit Notation (mentioning)
confidence: 99%
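To make the quoted description concrete, here is a minimal Python sketch of a decoder for the standard posit<n, es> encoding (sign bit, variable-length regime, up to es exponent bits, remaining bits as fraction). The function name decode_posit and the test values are illustrative assumptions, not code from the cited paper.

def decode_posit(bits: int, n: int, es: int) -> float:
    """Decode an n-bit posit with es exponent bits into a float (sketch)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):        # NaR (Not a Real)
        return float("nan")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask       # negatives are two's-complemented;
                                    # this realizes the signed hidden bit
    body = bits & ((1 << (n - 1)) - 1)   # drop the sign bit
    # Regime: a run of identical bits, terminated by the opposite bit.
    first = (body >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first else -run
    i -= 1                          # skip the regime's terminating bit
    rem = max(i + 1, 0)             # bits left for exponent + fraction
    e_bits = min(es, rem)
    e = (body >> (rem - e_bits)) & ((1 << e_bits) - 1) if e_bits else 0
    e <<= es - e_bits               # truncated exponent bits read as zero
    rem -= e_bits
    f = (body & ((1 << rem) - 1)) / (1 << rem) if rem else 0.0
    useed = 2 ** (2 ** es)          # regime scale factor
    return sign * useed ** k * 2 ** e * (1.0 + f)

# Illustrative checks for posit<8,1>, where useed = 4:
assert decode_posit(0b01000000, 8, 1) == 1.0
assert decode_posit(0b01101000, 8, 1) == 8.0    # regime k=1, exponent bit 1
assert decode_posit(0b11000000, 8, 1) == -1.0   # two's-complement negative

Note how the regime run length determines how many bits remain for the exponent and fraction: this variable-length behavior is exactly what forces the redesigned operator logic mentioned in the quote above.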
“…Many fields have benefited from the posit data format since its emergence, including weather forecasting [4], graph processing [5], and deep learning [6]. For deep learning applications in particular, prior works have optimized deep neural networks (DNNs) using posit data types for efficient inference [7], [8] and training [9], [10].…”
Section: Introduction (mentioning)
confidence: 99%