2021
DOI: 10.1109/tc.2020.2985971

Evaluations on Deep Neural Networks Training Using Posit Number System

Abstract: With the increasing size of Deep Neural Network (DNN) models, the high memory space requirements and computational complexity have become an obstacle to efficient DNN implementations. To ease this problem, using reduced-precision representations for DNN training and inference has attracted much interest from researchers. This paper first proposes a methodology for training DNNs with the posit arithmetic, a type-3 universal number (Unum) format that is similar to the floating point (FP) but has reduced precisi…

Cited by 47 publications (20 citation statements)
References 23 publications

Citation statements, ordered by relevance:
“…This is particularly problematic when the model weights are updated with low-precision posits, since they do not have enough resolution for small numbers. As suggested in [17], using 16-bit posits for the optimizer and loss is usually enough to allow models to train with low-precision posits. With this observation in mind, this model was trained with a different precision for the optimizer and loss, while using posit(8, 2) everywhere else (see Table 4).…”
Section: Posit
confidence: 99%
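
A minimal sketch may help make the mixed-precision recipe in the quote above concrete. The Python code below is an illustrative assumption rather than the cited paper's implementation: posit arithmetic is emulated by rounding to the nearest representable posit(nbits, es) value via a brute-force table of all bit patterns, the optimizer's master weights and update arithmetic are kept in 16-bit posits (es = 2 is assumed here), and a posit(8, 2) copy is handed back to the forward/backward pass. The demo at the bottom reproduces the resolution problem the quote describes: a small weight update is rounded away in posit(8, 2) but survives in the 16-bit master copy.

```python
import numpy as np
from functools import lru_cache

# Emulated mixed-precision posit training step -- an illustrative sketch under
# the assumptions stated above, not the cited paper's code.

@lru_cache(maxsize=None)
def _posit_table(nbits: int, es: int) -> np.ndarray:
    """All finite posit(nbits, es) values, sorted ascending (brute-force decode)."""
    useed = 2.0 ** (2 ** es)
    vals = []
    for p in range(1, 1 << (nbits - 1)):           # positive patterns (sign bit = 0)
        body = p << 1                               # align so the regime starts at the MSB
        first = (body >> (nbits - 1)) & 1
        run, i = 0, nbits - 1
        while i >= 0 and ((body >> i) & 1) == first:
            run, i = run + 1, i - 1
        k = run - 1 if first else -run              # regime value
        i -= 1                                      # skip the regime's terminating bit
        exp = 0
        for _ in range(es):                         # exponent bits cut off by the end are 0
            exp <<= 1
            if i >= 0:
                exp |= (body >> i) & 1
                i -= 1
        fbits = i + 1
        frac = (body & ((1 << fbits) - 1)) / (1 << fbits) if fbits else 0.0
        vals.append(useed ** k * 2.0 ** exp * (1.0 + frac))
    vals.sort()
    return np.array([-v for v in reversed(vals)] + [0.0] + vals)

def quantize(x, nbits: int, es: int) -> np.ndarray:
    """Round each element of x to the nearest posit(nbits, es) value (ties upward)."""
    table = _posit_table(nbits, es)
    x = np.asarray(x, dtype=np.float64)
    idx = np.clip(np.searchsorted(table, x), 1, len(table) - 1)
    lo, hi = table[idx - 1], table[idx]
    return np.where(x - lo < hi - x, lo, hi)

def sgd_step(master_w, grad, lr=0.01):
    """Master weights and update arithmetic in posit(16, 2); compute copy in posit(8, 2)."""
    grad16 = quantize(grad, 16, 2)
    master_w = quantize(master_w - lr * grad16, 16, 2)
    return master_w, quantize(master_w, 8, 2)

if __name__ == "__main__":
    w = np.array([0.5])
    # A small update vanishes if the weights themselves are kept in posit(8, 2) ...
    print(quantize(w - 0.01 * 0.05, 8, 2))          # still 0.5: not enough resolution
    # ... but survives in the 16-bit master copy used by the optimizer.
    print(sgd_step(w, np.array([0.05]))[0])         # approximately 0.4995
```

Table-based rounding like this is only practical for narrow formats; a real training run would rely on a hardware posit unit or a posit emulation library instead.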
“…Later, in [14,15], a FCNN was trained for MNIST and Fashion MNIST using {16, 32}-bit posits. In [16,17], Convolutional Neural Networks (CNNs) were trained using a mix of {8, 16}-bit posits, but still relying on floats for the first epoch and layer computations. More recently, in [18], a CNN was trained for CIFAR-10 but using only {16, 32}-bit posits.…”
Section: Introduction
confidence: 99%
“…Another very promising alternative to IEEE 32-bit Floating-point standard is the posit™ number system, proposed by Gustafson [19]. This format has been proven to match single precision accuracy performance with only 16 bits used for the representation [9,12,16,17,24]. Furthermore, the first hardware implementations of this novel type are very promising in terms of energy consumption and area occupation [10,20,28].…”
Section: Introduction
confidence: 99%
“…It uses a regime field to further scale the exponent and thus can provide much larger dynamic range than the floating-point formats. This can benefit many applications, such as deep learning training [3], where a large dynamic range is expected from the numeric format. In addition, posit number format uses tapered precision, and thus its number distribution is non-uniformed.…”
Section: Introduction
confidence: 99%
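
To see how the regime field produces that dynamic range, the following minimal decoder (an illustrative sketch, not code from the paper or any cited library) converts an n-bit posit(n, es) pattern into a float: a run of m identical regime bits contributes a factor useed^k with useed = 2^(2^es) and k = m - 1 for a run of 1s or k = -m for a run of 0s, after which the remaining bits are read as exponent and fraction. Because long regimes crowd out fraction bits, precision tapers toward the extremes, which is the non-uniform distribution mentioned in the quote.

```python
# Minimal posit(n, es) decoder -- an illustrative sketch showing how the regime
# run scales the value by useed**k, giving posits their wide dynamic range.

def decode_posit(bits: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit(n, es) bit pattern into a Python float."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):               # only the sign bit set -> NaR (not a real)
        return float("nan")

    sign = bits >> (n - 1)
    if sign:                                # negative posits decode via two's complement
        bits = (-bits) & ((1 << n) - 1)

    # Regime: a run of identical bits after the sign, ended by its complement.
    body = (bits << 1) & ((1 << n) - 1)     # drop the sign bit, keep n-bit width
    first = body >> (n - 1)
    run, pos = 0, n - 1
    while pos >= 0 and ((body >> pos) & 1) == first:
        run, pos = run + 1, pos - 1
    k = run - 1 if first else -run          # regime value
    pos -= 1                                # skip the regime's terminating bit

    # Exponent: the next es bits; bits that fall off the end count as 0.
    exp = 0
    for _ in range(es):
        exp <<= 1
        if pos >= 0:
            exp |= (body >> pos) & 1
            pos -= 1

    # Fraction: whatever remains, with an implicit leading 1.
    fbits = pos + 1
    frac = (body & ((1 << fbits) - 1)) / (1 << fbits) if fbits else 0.0

    useed = 2.0 ** (2 ** es)                # the regime's scale factor
    value = useed ** k * 2.0 ** exp * (1.0 + frac)
    return -value if sign else value


if __name__ == "__main__":
    # posit(8, 2): the representable range spans 2**-24 .. 2**24 in just 8 bits.
    print(decode_posit(0b00000001))         # minpos = useed**-6 = 2**-24
    print(decode_posit(0b01000000))         # 1.0
    print(decode_posit(0b01111111))         # maxpos = useed**6 = 2**24
```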