2022
DOI: 10.1093/imanum/drac037

Mixed precision low-rank approximations and their application to block low-rank LU factorization

Abstract: We introduce a novel approach to exploit mixed precision arithmetic for low-rank approximations. Our approach is based on the observation that singular vectors associated with small singular values can be stored in lower precisions while preserving high accuracy overall. We provide an explicit criterion to determine which level of precision is needed for each singular vector. We apply this approach to block low-rank (BLR) matrices, most of whose off-diagonal blocks have low rank. We propose a new BLR LU factor…
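
The explicit precision-selection criterion is derived in the paper itself; the sketch below is only a rough, assumed illustration of the idea stated in the abstract. It truncates an SVD and then assigns each retained singular vector to the cheapest floating-point format whose unit roundoff keeps its storage error within the truncation budget, using the stand-in rule u_p * sigma_i <= tol * sigma_1. The function name and the exact rule are hypothetical, not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' code) of mixed precision low-rank storage:
# singular vectors associated with small singular values are kept in lower precisions.
# The assignment rule  u_p * s[i] <= tol * s[0]  is an assumed stand-in for the explicit
# criterion derived in the paper.
import numpy as np

# candidate storage formats, from most to least accurate, with their unit roundoffs
PRECISIONS = [(np.float64, 2.0**-53), (np.float32, 2.0**-24), (np.float16, 2.0**-11)]

def mixed_precision_lowrank(A, tol=1e-8):
    """Truncated SVD of A with the retained singular vectors grouped by storage precision."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))            # usual low-rank truncation at tolerance tol
    groups = {}                                 # dtype -> indices of retained singular vectors
    for i in range(k):
        dtype = PRECISIONS[0][0]                # default: highest precision
        for d, u in PRECISIONS:
            if u * s[i] <= tol * s[0]:          # rounding error stays within the budget,
                dtype = d                       # so the cheaper format further down is allowed
        groups.setdefault(dtype, []).append(i)
    # store each group (singular values absorbed into the left factor) in its own precision
    return [(d, (U[:, idx] * s[idx]).astype(d), Vt[idx, :].astype(d))
            for d, idx in groups.items()]

# quick check on a synthetic block with decaying singular values
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((128, 128)))
Q2, _ = np.linalg.qr(rng.standard_normal((128, 128)))
A = (Q1 * np.logspace(0, -12, 128)) @ Q2
parts = mixed_precision_lowrank(A)
A_approx = sum(L.astype(np.float64) @ R.astype(np.float64) for _, L, R in parts)
print([(d.__name__, L.shape[1]) for d, L, R in parts],
      np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```

When the singular values decay quickly, most retained vectors land in the cheaper formats, which is the source of the storage compression that the citing papers report carrying over to the BLR LU factors and their triangular solves.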

Cited by 6 publications (2 citation statements)
References 31 publications
“…However, we mention as an important perspective of this work that the new hybrid variants may be even more successful in a mixed precision context. Indeed, in recent work, we have shown that the BLR LU factors can be stored in mixed precision while preserving the same accuracy [2]. As a result, the mixed precision BLR LU factors are further compressed and can reduce the triangular solve time.…”
mentioning
confidence: 99%
“…This storage reduction is translated into a comparable RL forward solve time reduction with a single RHS (1.3× speedup), but not with multiple ones (only 1.1× speedup). In future work it could therefore be promising to combine the use of mixed precision proposed in [2] and the new hybrid variants proposed in this article.…”
mentioning
confidence: 99%