2000
DOI: 10.1002/(sici)1099-1506(200004/05)7:3<99::aid-nla188>3.0.co;2-5
Approximate inverse preconditioning in the parallel solution of sparse eigenproblems

Abstract: A preconditioned scheme for solving sparse symmetric eigenproblems is proposed. The solution strategy relies upon the DACG algorithm, which is a Preconditioned Conjugate Gradient algorithm for minimizing the Rayleigh Quotient. A comparison with the well established ARPACK code shows that when a small number of the leftmost eigenpairs is to be computed, DACG is more efficient than ARPACK. Effective convergence acceleration of DACG is shown to be performed by a suitable approximate inverse preconditioner (AINV). …
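To illustrate the idea behind the abstract (not the authors' actual DACG recurrence), the following is a minimal sketch of preconditioned Rayleigh-quotient minimization for the leftmost eigenpair of a symmetric matrix: a steepest-descent direction, accelerated by a preconditioner M ≈ A⁻¹, with a two-dimensional Rayleigh–Ritz step in place of a line search. Function and variable names are hypothetical.

```python
import numpy as np

def dacg_sketch(A, M, tol=1e-8, maxit=2000):
    """Minimize the Rayleigh quotient x^T A x / x^T x for the leftmost
    eigenpair of symmetric A, using a preconditioner M ~ A^{-1}.
    Simplified steepest-descent sketch, not the full DACG recurrence."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)       # any start with a component on v_1
    q = x @ A @ x
    for _ in range(maxit):
        q = x @ A @ x                 # Rayleigh quotient (x is unit-norm)
        g = A @ x - q * x             # gradient direction of the quotient
        if np.linalg.norm(g) < tol:
            break
        p = M @ g                     # preconditioned search direction
        # Rayleigh-Ritz step: minimize the quotient over span{x, p}
        V = np.column_stack([x, p])
        Ar, Br = V.T @ A @ V, V.T @ V
        w, Y = np.linalg.eig(np.linalg.solve(Br, Ar))
        k = np.argmin(w.real)
        x = V @ Y[:, k].real
        x /= np.linalg.norm(x)
    return q, x
```

On a 1-D Laplacian test matrix with a Jacobi (diagonal) preconditioner, the iteration converges to the known smallest eigenvalue 2 − 2cos(π/(n+1)); the real algorithm uses conjugate directions and a far better preconditioner (AINV) to reduce the iteration count.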

Cited by 29 publications (21 citation statements)
References 28 publications
“…Parallel implementations of the basic AINV algorithm have been described in [9,12], with good results on a range of scalar PDE problems from the modeling of diffusion and transport phenomena on both structured and unstructured grids. Future work will focus on developing parallel implementations of the SAINV and block SAINV preconditioners for large-scale problems in solid and structural mechanics.…”
Section: Discussion
“…Much better results can be obtained in some cases with a different value of τ, but we deliberately avoided fine-tuning because we wanted to show that this is a good choice in general. Also, the performance of the algorithm is only moderately affected by the choice of τ, provided that it is not chosen too small or too large; see [12].…”
Section: Numerical Experiments
“…In the last few years there has been considerable interest in explicit preconditioning techniques based on directly approximating A −1 with a sparse matrix M ; see, e.g., [7], [8], [16], [18], [23], [24], [27], [31], and the recent survey [10]. Sparse approximate inverses have been shown to result in good rates of convergence of the preconditioned iteration (comparable to those obtained with incomplete factorization methods) while being well suited for implementation on vector and parallel architectures; see, e.g., [6], [9], [12], [21].…”
Section: Introduction
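The excerpt above describes approximating A⁻¹ directly with a sparse matrix M. A minimal dense sketch of the A-orthogonalization idea underlying AINV for SPD matrices follows: build Z with ZᵀAZ ≈ D diagonal, so that M = ZD⁻¹Zᵀ ≈ A⁻¹; entries of Z below a drop threshold (the τ of the earlier excerpt) are discarded to keep the factor sparse. The function name and dense formulation are illustrative, not the authors' implementation.

```python
import numpy as np

def ainv_sketch(A, drop_tol=0.0):
    """Factorized approximate inverse of an SPD matrix A by
    A-orthogonalization: returns Z (unit upper triangular) and d with
    Z @ diag(1/d) @ Z.T ~ inv(A). Dense illustrative sketch."""
    n = A.shape[0]
    Z = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        for j in range(i):
            # A-orthogonalize column i against earlier columns;
            # coefficient is e_i^T A z_j / z_j^T A z_j
            Z[:, i] -= ((A[i] @ Z[:, j]) / d[j]) * Z[:, j]
        # dropping small entries keeps Z sparse (the role of tau)
        small = np.abs(Z[:, i]) < drop_tol
        small[i] = False              # never drop the unit diagonal
        Z[small, i] = 0.0
        d[i] = Z[:, i] @ A @ Z[:, i]  # pivot z_i^T A z_i
    return Z, d
```

With drop_tol = 0 the factorization is exact (ZD⁻¹Zᵀ = A⁻¹ up to rounding); with a positive threshold one trades accuracy of M for sparsity, which is what makes the resulting preconditioner cheap to apply and well suited to parallel matrix–vector products.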
“…With the above assumption in mind, we neither deteriorate nor improve the effects of the selected preconditioners on the rate of convergence. Our assumption is justified in applications where the preconditioner matrices can be reused; see, for example, [12] and a discussion of it in [10]. The assumption is also justified in the preconditioner constructing methods that require a priori sparsity patterns for the preconditioner matrices [44,45], where techniques to develop effective sparsity patterns already exist in the literature [23,24,39].…”