2019
DOI: 10.1016/j.parco.2019.102548

Parallelization and scalability analysis of inverse factorization using the chunks and tasks programming model

Abstract: We present three methods for distributed memory parallel inverse factorization of block-sparse Hermitian positive definite matrices. The three methods are a recursive variant of the AINV inverse Cholesky algorithm, iterative refinement, and localized inverse factorization, respectively. All three methods are implemented using the Chunks and Tasks programming model, building on the distributed sparse quad-tree matrix representation and parallel matrix-matrix multiplication in the publicly available Chunks and T…
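The iterative refinement method named in the abstract can be sketched generically. The NumPy snippet below is an illustrative dense-matrix sketch, not the paper's block-sparse Chunks and Tasks implementation; it assumes the standard refinement iteration Z ← Z(I + δ/2) with δ = I − ZᵀAZ, which converges when ‖δ‖ < 1 (guaranteed here by the scaled-identity starting guess).

```python
import numpy as np

def refine_inverse_factor(A, tol=1e-12, max_iter=50):
    """Return Z with Z.T @ A @ Z ~= I for a symmetric positive definite A.

    Generic dense sketch of iterative refinement for inverse
    factorization; the paper's version operates on block-sparse
    matrices in a quad-tree representation.
    """
    n = A.shape[0]
    # Starting guess Z0 = I / sqrt(||A||_2) makes the eigenvalues of
    # delta0 = I - Z0.T A Z0 lie in [0, 1), so the iteration converges.
    Z = np.eye(n) / np.sqrt(np.linalg.norm(A, 2))
    for _ in range(max_iter):
        delta = np.eye(n) - Z.T @ A @ Z
        if np.linalg.norm(delta, 2) < tol:
            break
        # One refinement step: Z <- Z (I + delta / 2)
        Z = Z @ (np.eye(n) + 0.5 * delta)
    return Z

# Usage on a small random SPD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)   # symmetric positive definite by construction
Z = refine_inverse_factor(A)
print(np.linalg.norm(Z.T @ A @ Z - np.eye(6)))  # small residual
```

Each step roughly squares the error ‖δ‖, so a handful of iterations suffices once the starting guess is within the convergence region.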

Cited by 3 publications (1 citation statement)
References 33 publications
“…The Chunks and Tasks Matrix Library has been indispensable in the development, implementation, and analysis of a number of novel parallel sparse matrix algorithms for distributed memory systems, including a general purpose parallel sparse matrix-matrix multiply efficiently exploiting locality of nonzero matrix entries [18], a sparse approximate matrix-matrix multiply for matrices with decay [2,3], localized inverse factorization [19,4], a new communication-avoiding divide and conquer method for inverse factorization of symmetric positive definite matrices, and density matrix purification [15].…”
mentioning
confidence: 99%