Improving Multifrontal Methods by Means of Block Low-Rank Representations
2015
DOI: 10.1137/120903476

Cited by 158 publications (175 citation statements). References 30 publications.
“…The Intel MKL 2017 is used for the BLAS and SVD kernels. The RRQR kernel comes from the BLR-MUMPS solver [20]; it extends the block rank-revealing QR factorization subroutines from LAPACK 3.6.0 (xGEQP3) to stop the factorization once the target precision is reached.…”
Section: Methods
confidence: 99%
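The truncation strategy quoted above can be illustrated with a short sketch. The example below uses SciPy's column-pivoted QR, which wraps the same LAPACK routine (xGEQP3); unlike the BLR-MUMPS kernel, which stops the factorization early once the tolerance is met, this sketch factorizes fully and truncates afterwards. The function name truncated_rrqr and the tolerance eps are illustrative choices, not part of the cited solver.

```python
import numpy as np
from scipy.linalg import qr

def truncated_rrqr(A, eps=1e-8):
    """Compress A into low-rank factors via column-pivoted QR.

    SciPy's qr(pivoting=True) wraps LAPACK xGEQP3. LAPACK runs the
    factorization to completion; the truncation here is applied
    afterwards by scanning the diagonal of R, whereas the BLR-MUMPS
    kernel stops early inside the factorization itself.
    """
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))  # non-increasing for pivoted QR
    # Numerical rank: first index where the pivoted diagonal of R
    # drops below eps relative to its largest entry.
    k = int(np.searchsorted(-diag, -eps * diag[0]))
    k = max(k, 1)
    X = Q[:, :k]
    Y = np.empty_like(R[:k, :])
    Y[:, piv] = R[:k, :]       # undo the column pivoting: A ~ X @ Y
    return X, Y

# Quick check on a matrix of numerical rank 5.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))
X, Y = truncated_rrqr(A, eps=1e-10)
print(X.shape[1], np.linalg.norm(A - X @ Y) / np.linalg.norm(A))
```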
“…Block Low-Rank compression has been investigated for dense matrices [18,19], and for sparse linear systems solved with a multifrontal method [20,21]. Since these approaches are similar to the current study, a detailed comparison is given in Section 6.…”
Section: Introduction
confidence: 92%
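As a rough illustration of the block low-rank format discussed in these citations, the sketch below tiles a dense matrix and replaces off-diagonal tiles by truncated-SVD factors whenever that saves storage. The tile size, tolerance, and helper names (blr_compress, blr_matvec) are assumptions made for the example; production BLR solvers such as BLR-MUMPS compress with cheaper kernels (e.g. the RRQR above) inside a factorization rather than applying an SVD to an assembled matrix.

```python
import numpy as np

def blr_compress(A, b=64, eps=1e-6):
    """Store A in a toy block low-rank (BLR) format.

    The matrix is cut into b x b tiles; each off-diagonal tile is
    replaced by truncated-SVD factors (U, V) when that is cheaper,
    while diagonal tiles stay dense.
    """
    n = A.shape[0]
    tiles = {}
    for i in range(0, n, b):
        for j in range(0, n, b):
            T = A[i:i + b, j:j + b]
            if i == j:
                tiles[(i, j)] = ('dense', T.copy())
                continue
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            k = max(1, int(np.sum(s > eps * s[0])))
            if k * (T.shape[0] + T.shape[1]) < T.size:  # low rank pays off
                tiles[(i, j)] = ('lr', U[:, :k] * s[:k], Vt[:k])
            else:
                tiles[(i, j)] = ('dense', T.copy())
    return tiles

def blr_matvec(tiles, x, n, b=64):
    """Multiply a BLR-stored matrix by a vector, tile by tile."""
    y = np.zeros(n)
    for (i, j), t in tiles.items():
        xj = x[j:j + b]
        if t[0] == 'dense':
            y[i:i + b] += t[1] @ xj
        else:
            y[i:i + b] += t[1] @ (t[2] @ xj)
    return y

# A smooth kernel matrix, whose off-diagonal blocks are numerically low rank.
n = 256
pts = np.arange(n)
A = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))
tiles = blr_compress(A, b=64, eps=1e-8)
x = np.random.default_rng(1).standard_normal(n)
print(np.linalg.norm(blr_matvec(tiles, x, n, 64) - A @ x))
```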
“…To solve the resulting linear system we use the MUMPS direct factorisation solver [44], interfaced through PETSc [42].…”
Section: Tangent Linear Model
confidence: 99%
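For readers unfamiliar with this setup, the following is a minimal petsc4py sketch of selecting MUMPS as the direct solver behind PETSc's KSP/PC interface. It assumes a PETSc build with MUMPS enabled and uses a toy tridiagonal system in place of the tangent linear model's matrix; it is not the cited authors' code.

```python
# Requires petsc4py and a PETSc installation built with MUMPS support.
from petsc4py import PETSc

# Toy SPD tridiagonal system standing in for the actual model matrix.
n = 100
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft(); b.set(1.0)
x = A.createVecRight()

# "preonly" + LU turns the solve into a single direct factorization,
# and setFactorSolverType routes that factorization to MUMPS.
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.PREONLY)
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.LU)
pc.setFactorSolverType('mumps')
ksp.setFromOptions()
ksp.solve(b, x)

# Residual check: r = b - A x.
r = b.duplicate()
A.mult(x, r)
r.aypx(-1.0, b)
print('residual norm:', r.norm())
```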