Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region 2020
DOI: 10.1145/3368474.3368479

Effect of Mixed Precision Computing on H-Matrix Vector Multiplication in BEM Analysis

Abstract: Hierarchical matrix (H-matrix) is an approximation technique that splits a target dense matrix into multiple submatrices and low-rank approximates a selected subset of them. The technique substantially reduces both the time and space complexity of dense matrix-vector multiplication, and hence has been applied to numerous practical problems. In this paper, we aim to accelerate the H-matrix-vector multiplication by introducing mixed precision computing, where we employ both binary64 (FP64) and binary32 (FP32)…
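To make the idea in the abstract concrete, here is a minimal sketch (assumed block-list representation and hypothetical names, not the authors' implementation) of an H-matrix-vector product in which dense blocks stay in FP64 while low-rank blocks are stored and applied in FP32, with accumulation in FP64:

import numpy as np

def hmatvec_mixed(n, dense_blocks, lowrank_blocks, x):
    # y = A x for an H-matrix given as a list of dense blocks (FP64)
    # and low-rank blocks U @ V.T whose factors are stored in FP32.
    y = np.zeros(n, dtype=np.float64)
    for rows, cols, D in dense_blocks:          # D: float64 submatrix
        y[rows] += D @ x[cols]
    for rows, cols, U, V in lowrank_blocks:     # U, V: float32 factors
        x32 = x[cols].astype(np.float32)
        y[rows] += (U @ (V.T @ x32)).astype(np.float64)
    return y

Because the low-rank factors dominate the storage of an H-matrix, keeping them in FP32 roughly halves their memory traffic, which is where the speedup comes from.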

Cited by 5 publications (4 citation statements). References 31 publications.
“…We thus end up with a partitioning of the SVD as defined by (2.17)–(2.18). We note that this partitioning is similar to the Method 3 proposed in [30]. Our analysis justifies the use of this partitioning and gives a precise rule to define the p groups depending on the singular values and on the precisions.…”
Section: Mixed Precision Low-rank Approximations (supporting)
Confidence: 59%
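A grouping rule of that kind can be sketched as follows (the threshold used here is an assumption for illustration; the precise rule is the one derived in the citing paper): each singular triplet is stored in the cheapest precision whose unit roundoff u_k keeps the conversion error sigma_i * u_k below eps * sigma_1.

import numpy as np

def partition_by_precision(s, eps, unit_roundoffs=(2.0**-53, 2.0**-24, 2.0**-11)):
    # Assign each singular value s[i] to a precision group:
    # 0 = FP64, 1 = FP32, 2 = FP16. A triplet may be stored in
    # precision k if its conversion error s[i] * u_k <= eps * s[0].
    groups = np.zeros(len(s), dtype=int)
    for i, sigma in enumerate(s):
        for k, u in enumerate(unit_roundoffs):
            if sigma * u <= eps * s[0]:
                groups[i] = k   # keep the cheapest admissible precision
    return groups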
“…Thus, performance enhancements can be obtained by performing compute-intensive portions in FP32, provided the error incurred is within reason. Similar methods have been successfully used in other domains of computational science as well [40,41]. For the class of high dimensional dynamical systems considered in this paper, it is proposed to evaluate the nested summations, arising out of the coupling terms, involved in the function evaluation in FP32 (see Eq.…”
Section: Utilizing Mixed FP32-FP64 Computations (mentioning)
Confidence: 99%
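A toy version of that pattern (illustrative only; the cited work's governing equations are different): evaluate the coupling sums in FP32 and promote the result to FP64 for the rest of the time step.

import numpy as np

def coupling_terms(c, a):
    # f_i = sum_j c[i, j] * a[j]: the nested summation runs in FP32,
    # and the result is promoted back to FP64 for the integrator.
    inner = c.astype(np.float32) @ a.astype(np.float32)
    return inner.astype(np.float64)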
“…Note that this approach is applicable not only to the SVD but also to other types of rank-revealing decompositions, such as QR factorization with column pivoting. Ooi et al. (2020) propose three different methods to introduce mixed precision arithmetic in the product of a low-rank matrix with a vector. Their method 3 is similar to the representation (12.1), which they use with fp64 and fp32 arithmetics.…”
Section: SVD With Rapidly Decaying Singular Values (mentioning)
Confidence: 99%
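In the spirit of that two-precision representation, here is a sketch under assumed truncation thresholds (not the exact construction of either paper): keep the leading singular triplets in FP64, demote the trailing ones to FP32, and sum the two partial products in FP64.

import numpy as np

def split_svd_two_precisions(A, eps, u32=2.0**-24):
    # Truncated SVD split into an FP64 part (large singular values) and
    # an FP32 part (triplets whose conversion error is below eps * s[0]).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s >= eps * s[0]                  # truncate negligible triplets
    U, s, Vt = U[:, keep], s[keep], Vt[keep]
    lo = s * u32 <= eps * s[0]              # safe to store in FP32
    hi = ~lo
    U64, V64 = U[:, hi] * s[hi], Vt[hi]
    U32 = (U[:, lo] * s[lo]).astype(np.float32)
    V32 = Vt[lo].astype(np.float32)
    return U64, V64, U32, V32

def mixed_lowrank_matvec(U64, V64, U32, V32, x):
    y = U64 @ (V64 @ x)                                           # FP64 part
    y += (U32 @ (V32 @ x.astype(np.float32))).astype(np.float64)  # FP32 part
    return y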
“…This can be the case, for example, of SVDs or rank-revealing factorizations. In fact, the mixed precision truncated SVD approaches described in Section 12.2 (Amestoy et al. 2021a, Ooi et al. 2020) are precisely based on this property: rounding errors introduced by converting singular vectors to lower precision are demagnified by the associated singular value, and so the precision of each vector should be selected based on its associated singular value.…”
Section: At the Column Level (Or Equivalently at the Row Level) (mentioning)
Confidence: 99%
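That demagnification is easy to check numerically (a standalone illustration, not taken from the paper): rounding the vectors of the term sigma * u * v^T to FP32 perturbs the term by roughly sigma times the FP32 unit roundoff, so vectors attached to small singular values tolerate low precision.

import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(200); u /= np.linalg.norm(u)
v = rng.standard_normal(200); v /= np.linalg.norm(v)

for sigma in (1.0, 1e-4, 1e-8):
    exact = sigma * np.outer(u, v)
    rounded = sigma * np.outer(u.astype(np.float32).astype(np.float64),
                               v.astype(np.float32).astype(np.float64))
    # The error scales like sigma * u_fp32 (about sigma * 6e-8).
    print(f"sigma={sigma:.0e}  error={np.linalg.norm(exact - rounded):.2e}")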