A comparative study of sparse approximate inverse preconditioners
Published: 1999
DOI: 10.1016/s0168-9274(98)00118-4

Abstract: A number of recently proposed preconditioning techniques based on sparse approximate inverses are considered. A description of the preconditioners is given, and the results of an experimental comparison performed on one processor of a Cray C98 vector computer using sparse matrices from a variety of applications are presented. A comparison with more standard preconditioning techniques, such as incomplete factorizations, is also included. Robustness, convergence rates, and implementation issues are discussed.

Cited by 227 publications (208 citation statements)
References: 68 publications
“…More recently, sparse approximate inverses have become an interesting approach to precondition conjugate gradient iterations [13,14]. Contrary to incomplete factorizations, the computation of approximate inverses can be parallelized in a straightforward way.…”
Section: Incomplete Cholesky Factorizations (mentioning)
confidence: 99%
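A minimal SciPy sketch (not taken from any of the papers quoted here) of the point this statement makes: once an explicit sparse approximate inverse is available, applying the preconditioner inside a conjugate gradient iteration is a single sparse matrix-vector product rather than the two triangular solves of an incomplete factorization, which is why the application step vectorizes and parallelizes so easily. The scaled identity below is only a stand-in for a real approximate inverse.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg, LinearOperator

# SPD model problem: a tridiagonal, diagonally dominant matrix
n = 1000
A = diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Stand-in for a sparse approximate inverse M ~ A^{-1}; a real SAI would be
# built, e.g., by Frobenius-norm minimization and carry a richer pattern.
M_sai = identity(n, format="csr") / 4.0

# Applying the preconditioner is one sparse mat-vec, easy to parallelize,
# in contrast to the sequential triangular solves of incomplete Cholesky.
M = LinearOperator((n, n), matvec=lambda r: M_sai @ r)

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg stopped with info={info}")
```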
“…Even for symmetric positive definite matrices, existence of the standard IC factorization is guaranteed only for some special classes of matrices (see, e.g., [14]). In the symmetric positive definite case, variants of IC have been developed to avoid ill-conditioning and breakdown (see, e.g., [15,16]). Nevertheless, usually these modifications are expensive and introduce additional parameters to be chosen in the algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
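For concreteness, here is a minimal dense-matrix sketch (not from the cited works; the function names are hypothetical) of a no-fill incomplete Cholesky factorization with a breakdown check, together with the simple diagonal-shift retry that the quoted passage alludes to when it mentions variants with extra parameters.

```python
import numpy as np

def ic0(A):
    """No-fill incomplete Cholesky: A ~ L @ L.T with L restricted to the
    lower-triangular sparsity pattern of A.  Raises on a non-positive pivot
    (the 'breakdown' that can occur even for SPD matrices)."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    pattern = A != 0
    for k in range(n):
        if L[k, k] <= 0.0:
            raise np.linalg.LinAlgError(f"non-positive pivot at step {k}")
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if pattern[i, k]:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):          # update trailing entries,
            for i in range(j, n):          # but only inside the pattern
                if pattern[i, j]:
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def shifted_ic0(A, alpha0=1e-3, max_tries=8):
    """Crude breakdown remedy: factor A + alpha*diag(A), increasing the shift
    alpha until ic0 succeeds.  The shift is exactly the kind of additional
    parameter the quoted passage refers to."""
    D = np.diag(np.diag(A))
    alpha = 0.0
    for _ in range(max_tries):
        try:
            return ic0(A + alpha * D), alpha
        except np.linalg.LinAlgError:
            alpha = alpha0 if alpha == 0.0 else 10.0 * alpha
    raise RuntimeError("no suitable diagonal shift found")
```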
“…For a given square matrix A, there exist several proposals for constructing robust sparse inverse approximations which are based on optimization techniques, mainly based on minimizing the Frobenius norm of the residual (I − XA) over a set P of matrices with a certain sparsity pattern; see e.g., [5,6,15,17-22]. An advantage of these approximate inverse preconditioners is that the process of building them, as well as applying them, is well suited for parallel platforms.…”
Section: Introduction (mentioning)
confidence: 99%
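A minimal dense sketch (not taken from any of the cited references; the names are illustrative) of the Frobenius-norm construction mentioned above: because ||I − XA||_F^2 splits into one term per row of X, each row is an independent small least-squares problem over its allowed sparsity pattern, which is what makes the construction well suited to parallel platforms.

```python
import numpy as np

def frobenius_sai(A, pattern):
    """Left sparse approximate inverse: minimize ||I - X @ A||_F with row i
    of X allowed nonzeros only at the column indices pattern[i]."""
    n = A.shape[0]
    X = np.zeros((n, n))
    for i in range(n):                     # rows decouple -> parallelizable
        J = pattern[i]
        e_i = np.zeros(n)
        e_i[i] = 1.0
        # row i of X @ A only involves rows J of A, so this is a small LS fit
        x_J, *_ = np.linalg.lstsq(A[J, :].T, e_i, rcond=None)
        X[i, J] = x_J
    return X

# usage: take the pattern of A itself as the pattern of X (a common default)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
pattern = [list(np.flatnonzero(A[i])) for i in range(A.shape[0])]
X = frobenius_sai(A, pattern)
print(np.linalg.norm(np.eye(3) - X @ A))   # Frobenius residual of the SAI
```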
“…Therefore it is necessary to first compress the force-displacement matrix. We note that there have been considerable efforts in directly approximating an inverse matrix using a sparse matrix [32-34]. However, they are developed to obtain better pre-conditioners for fast convergence.…”
Section: Clustering Methods (mentioning)
confidence: 99%