2018
DOI: 10.1016/j.jocs.2018.06.007
Sparse supernodal solver using block low-rank compression: Design, performance and analysis

Abstract: This paper presents two approaches using a Block Low-Rank (BLR) compression technique to reduce the memory footprint and/or the time-to-solution of the sparse supernodal solver PaStiX. This flat, non-hierarchical compression method makes it possible to exploit the low-rank property of the blocks appearing during the factorization of sparse linear systems that arise from the discretization of partial differential equations. The first approach, called Minimal Memory, illustrates the maximum memory gain…
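The flat BLR idea described in the abstract can be illustrated with a small sketch. This is not the PaStiX implementation: it assumes NumPy, a truncated-SVD compression kernel, and a smooth 1/(1 + |i - j|) test matrix standing in for a PDE discretization.

```python
# Illustrative sketch of flat Block Low-Rank (BLR) compression -- not the
# PaStiX implementation. Each tile that is numerically low-rank is replaced
# by a truncated-SVD pair (U, V); tiles are kept dense when compression
# would not save storage.
import numpy as np

def compress_block(block, tol=1e-8):
    """Return (U, V) with block ~= U @ V if that saves storage, else block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    rank = max(1, int(np.sum(s > tol * s[0])))    # numerical rank at tol
    m, n = block.shape
    if rank * (m + n) < m * n:                    # low-rank form is smaller
        return U[:, :rank] * s[:rank], Vt[:rank]  # fold singular values into U
    return block

# Smooth kernel 1/(1 + |i - j|): tiles far from the diagonal have low
# numerical rank, mimicking matrices from discretized PDEs.
n, nb = 256, 64
idx = np.arange(n)
A = 1.0 / (1.0 + np.abs(np.subtract.outer(idx, idx)))

dense_entries, stored_entries = 0, 0
for i in range(0, n, nb):
    for j in range(0, n, nb):
        tile = A[i:i+nb, j:j+nb]
        out = compress_block(tile)
        dense_entries += tile.size
        stored_entries += (out.size if isinstance(out, np.ndarray)
                           else out[0].size + out[1].size)
print(f"BLR storage / dense storage: {stored_entries / dense_entries:.2f}")
```

The tile-by-tile decision (compress only when `rank * (m + n) < m * n`) is what makes the format "flat": no nesting, each block stands alone, which is also what makes it easy to parallelize.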

Cited by 26 publications (20 citation statements)
References 29 publications
“…Even though the BLR format has been extensively studied and widely used in numerous applications [1,2,3,4,5,7,12,23,25,27,28,30,31,33], little is known about its numerical behavior in floating-point arithmetic. Indeed, no rounding error analysis has been published for BLR matrix algorithms.…”
mentioning
confidence: 99%
“…Another widely used algorithm is the CUF variant (see, for example, [4,28]), which compresses the entire matrix A and then computes its BLR LU factorization. The CUF algorithm is very similar to the CFU one, only differing in that, at line 15 of Algorithm 4.2, the blocks A_ik (and A_ki) are already in LR form.…”
mentioning
confidence: 99%
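The CUF/CFU distinction in the statement above is purely about when compression happens relative to the factorization. A minimal sketch under stated assumptions — NumPy, no pivoting, hypothetical `compress`/`dense` helpers, and dense Schur updates for clarity (real BLR solvers keep the updates in low-rank form as well); this is not Algorithm 4.2 of the cited paper:

```python
# Illustrative right-looking block LU contrasting the CUF and CFU orderings.
# Low-rank blocks are stored as (U, V) pairs; compress/dense are hypothetical
# helper names, not an API from PaStiX or the cited paper.
import numpy as np

def compress(B, tol=1e-10):
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return (U[:, :r] * s[:r], Vt[:r])          # B ~= U @ V

def dense(B):
    return B[0] @ B[1] if isinstance(B, tuple) else B

def blr_lu(A, nb, variant):
    p = A.shape[0] // nb
    blk = {(i, j): A[i*nb:(i+1)*nb, j*nb:(j+1)*nb].copy()
           for i in range(p) for j in range(p)}
    if variant == "CUF":                 # compress the whole matrix first:
        for i in range(p):               # off-diagonal blocks are already in
            for j in range(p):           # LR form when the factorization
                if i != j:               # reads them
                    blk[i, j] = compress(blk[i, j])
    for k in range(p):
        if variant == "CFU":             # compress each panel block just
            for i in range(k + 1, p):    # before it is factored/used
                blk[i, k] = compress(dense(blk[i, k]))
                blk[k, i] = compress(dense(blk[k, i]))
        Akk_inv = np.linalg.inv(dense(blk[k, k]))
        for i in range(k + 1, p):        # panel solve: an LR block stays LR,
            Lik = blk[i, k]              # only its V factor is updated
            blk[i, k] = ((Lik[0], Lik[1] @ Akk_inv) if isinstance(Lik, tuple)
                         else Lik @ Akk_inv)
        for i in range(k + 1, p):        # Schur update (dense for clarity)
            for j in range(k + 1, p):
                blk[i, j] = (dense(blk[i, j])
                             - dense(blk[i, k]) @ dense(blk[k, j]))
    return blk, p

A = np.random.default_rng(1).standard_normal((8, 8)) + 8 * np.eye(8)
for variant in ("CUF", "CFU"):
    blk, p = blr_lu(A, nb=4, variant=variant)
    L, U = np.eye(8), np.zeros((8, 8))   # rebuild and verify L @ U ~= A
    for i in range(p):
        for j in range(p):
            (L if i > j else U)[i*4:(i+1)*4, j*4:(j+1)*4] = dense(blk[i, j])
    print(variant, "max error:", abs(L @ U - A).max())
```

The only difference between the two branches is the point at which `compress` is called; once a panel block is in LR form, the triangular solve touches only its V factor, which is where the time savings of compressing early come from.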
“…Even though the BLR format achieves a higher theoretical complexity than hierarchical formats, its simplicity and flexibility make it easy to use in the context of a general purpose, algebraic solver [6,33,4,36]. Due to its non-hierarchical nature, the BLR format is particularly efficient on parallel computers [6,35,14,1].…”
mentioning
confidence: 99%
“…• PaStiX BLR [102,103] provides BLR compression techniques within the sparse supernodal solver PaStiX, either as a direct solver operating at a lower precision or as a preconditioner. The implementations included in PaStiX BLR can be executed on shared-memory parallel platforms.…”
Section: Linear Algebra Software For Compressed Matrices
mentioning
confidence: 99%
“…On the other hand, in general, the definition and storage of H-Matrices leads to complex data accesses. This fact promoted the appearance of alternative structures, such as BLR [10,89,103] and lattice H-Matrices [68,116], that trade off slightly higher time and memory costs in exchange for superior simplicity. One asset of these approaches is that they make it easier to exploit parallelism, as they present more regular structures.…”
Section: The Basics Of H-chameleon
mentioning
confidence: 99%