Advances in Mixed Precision Algorithms: 2021 Edition
Published: 2021
DOI: 10.2172/1814447

Cited by 3 publications (6 citation statements). References 0 publications. Citing publications appeared in 2022 and 2024.

Citation statements:
“…Until recently, mathematical software was usually implemented in one fixed precision. This paradigm is starting to change [1,6,7,15,21,22,27,28] as we currently witness an expanding use of both high-precision (for enhanced accuracy) and low-precision (for enhanced speed) arithmetic; sometimes, different precisions may even be used for different steps in the same algorithm.…”
Section: Numerical Algorithm
confidence: 99%
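As a minimal sketch of the pattern this excerpt describes (not code from the report or the citing paper), mixed-precision iterative refinement uses different precisions for different steps of one algorithm: a low-precision solve for speed and high-precision residual updates for accuracy. The function name and test matrix below are hypothetical.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b: solve in float32 for speed, refine the residual in float64 for accuracy.
    (A real implementation would reuse a single LU factorization instead of re-solving.)"""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # correction in float32
        x = x + d.astype(np.float64)                     # update in float64
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100.0 * np.eye(100)  # well-conditioned test matrix
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual shrinks toward float64 level
```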
“…The straightforward implementations, which we denote by ĝ and ĥ respectively, provide backward stable algorithms. However, in IEEE Standard 754 double-precision floating-point arithmetic, computing ĝ ○ ĥ results in the floating-point representation fl(1/3) of 1/3. Since fl(1/3) ≠ 1/3 = g(h(x)), there is no real input x′ such that (g ○ h)(x′) = fl(1/3).…”
Section: Introduction
confidence: 99%
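The particular g and h are not reproduced in this excerpt; the following minimal check only illustrates the fact the argument relies on, namely that the double-precision value fl(1/3) differs from the exact rational 1/3.

```python
from fractions import Fraction

fl_one_third = 1.0 / 3.0                 # rounded to the nearest IEEE 754 double
exact = Fraction(1, 3)

print(Fraction(fl_one_third) == exact)   # False: fl(1/3) is not exactly 1/3
print(exact - Fraction(fl_one_third))    # the tiny rounding error
```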
“…To get the full sum for d^(3), one should add the partial sums from M^(0,0), M^(0,1), and M^(1,1).…”
Section: With These Two Techniques the Required Number Of Rows For C ...
confidence: 99%
“…As fig. 8 showed, even in fp16 precision, the intermediate results of the tensor core are computed with fp32 numbers [14,1]. Given that the accuracy of im2tensor (fp16) is about the same as the mixed (fp16/fp32) versions of "naive" and im2tensor, we can deduce that the accuracy is further limited by the storage type rather than the type used for intermediate computations.…”
confidence: 96%
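As a rough sketch only (not the cited paper's code, and using a NumPy dot product rather than an actual tensor core), the comparison below contrasts fp16 accumulation, fp32 accumulation, and fp32 accumulation whose result is then stored in fp16, illustrating how the storage type can cap the final accuracy even when intermediate computations use higher precision.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4096).astype(np.float16)
b = rng.standard_normal(4096).astype(np.float16)

ref = np.dot(a.astype(np.float64), b.astype(np.float64))   # high-precision reference

acc16 = np.float16(0.0)
for x, y in zip(a, b):                                      # accumulate entirely in fp16
    acc16 = np.float16(acc16 + x * y)

acc32 = np.dot(a.astype(np.float32), b.astype(np.float32))  # accumulate in fp32 ...
stored16 = np.float16(acc32)                                # ... then store the result in fp16

print("fp16 accumulation error:        ", abs(float(acc16) - ref))
print("fp32 accumulation error:        ", abs(float(acc32) - ref))
print("fp32 accumulate, fp16 storage:  ", abs(float(stored16) - ref))
```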