Iterative Methods for Sparse Linear Systems
Yousef Saad, 2003. DOI: 10.1137/1.9780898718003

Cited by 9,752 publications (9,915 citation statements); references 140 publications.
“…We refer the reader to, e.g., [29] for a general introduction on Krylov subspace methods and to [29, Section 10] and [25, Section 9.4] for a review on flexible methods. The minimum residual norm GMRES method [26] has been extended by Saad [23] to allow variable preconditioning.…”
Section: General Setting (mentioning, confidence: 99%)
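The flexible (variable-preconditioning) GMRES mentioned in the excerpt above keeps the preconditioned directions so that a different preconditioner can be applied at every iteration. Below is a minimal Python/NumPy sketch in the spirit of Saad's right-preconditioned FGMRES; it is illustrative only, and the operator A, the per-iteration preconditioner callback precond(v, j), the restart length m, and the tolerances are assumptions rather than details taken from the cited works.

import numpy as np

def fgmres(A, b, precond, x0=None, m=30, tol=1e-8):
    # Flexible GMRES sketch: precond(v, j) may change with the iteration
    # index j, so the preconditioned vectors z_j are stored in Z and the
    # correction is built as x = x0 + Z @ y.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    beta = np.linalg.norm(r)
    if beta < tol:
        return x
    V = np.zeros((n, m + 1))          # orthonormal Arnoldi basis vectors v_j
    Z = np.zeros((n, m))              # preconditioned directions z_j
    H = np.zeros((m + 1, m))          # upper Hessenberg matrix
    V[:, 0] = r / beta
    y = np.zeros(0)
    for j in range(m):
        Z[:, j] = precond(V[:, j], j)             # variable preconditioning step
        w = A @ Z[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
        # small least-squares problem min_y || beta*e1 - H_bar y ||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if res < tol or H[j + 1, j] <= 1e-14:
            return x + Z[:, :j + 1] @ y
    return x + Z @ y

Storing Z alongside the Arnoldi basis V roughly doubles the memory of standard GMRES, which is the price paid for letting the preconditioner vary from one iteration to the next.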
“…When non-variable preconditioning is considered, the full GMRES method [26] is often chosen for the solution of non-symmetric or non-Hermitian linear systems because of its robustness and its minimum residual norm property [25]. Nevertheless, to control both the memory requirements and the computational cost of the orthogonalization scheme, restarted GMRES is preferred; it corresponds to a scheme where the maximal dimension of the approximation subspace is fixed.…”
Section: Introduction (mentioning, confidence: 99%)
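As a concrete illustration of the restarted scheme described in this excerpt, the sketch below calls SciPy's GMRES with a fixed restart length, so at most that many Arnoldi vectors are kept before the method restarts. The test matrix, its size, and the iteration limits are arbitrary choices made for the example, not values from the cited paper.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Illustrative nonsymmetric sparse tridiagonal system.
n = 1000
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# GMRES(30): the Arnoldi basis is limited to 30 vectors, which caps both the
# storage and the cost of the orthogonalization, at the price of restarting.
x, info = gmres(A, b, restart=30, maxiter=200)
print(info, np.linalg.norm(A @ x - b))

Smaller restart values reduce memory and orthogonalization cost but can slow or stall convergence, which is the trade-off the excerpt alludes to.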
“…For clarity of presentation, we assume that the partitions are of equal size m and that each overlap is of size τ. Thus, we can rewrite (1) as two linear systems (4) and (5), where we choose the adjustment vector y such that the solution of the lower part of (4) coincides with the upper part of (5); in other words, (6). Let us assume, for now, that each overlapped partition is nonsingular and that (7). Thus, using (4), (5) and (6) we obtain the balance system (8), (9), (10). Notice that once the balance system (8) is solved for y, the linear systems (4) and (5) can be solved independently in parallel. Since the coefficient matrix M is not available explicitly, we use a modified iterative method to solve the balance system (8), where we compute the residual and matrix-vector products as described below.…”
Section: B. Sparse Matrix Solver (mentioning, confidence: 99%)
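Because the balance-system matrix M in this excerpt is never formed explicitly (and its defining equations are not reproduced here), the sketch below only shows the mechanism such a scheme typically relies on: wrapping a user-supplied, matrix-free product in a SciPy LinearOperator and handing it to an iterative solver. The function apply_balance_matrix, the identity stand-in it returns, and the overlap size tau are hypothetical placeholders, not the cited method.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def apply_balance_matrix(y):
    # Hypothetical placeholder: in the cited scheme this would apply M @ y
    # implicitly, e.g. by solving the two overlapped partitions (4) and (5)
    # and taking the mismatch on the overlap. Identity stand-in so the
    # sketch runs.
    return y

tau = 50                                    # overlap size (illustrative)
M_op = LinearOperator((tau, tau), matvec=apply_balance_matrix, dtype=float)
g = np.ones(tau)                            # right-hand side of the balance system
y, info = gmres(M_op, g, restart=20, maxiter=100)
print(info)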
“…Pardiso [6], MUMPS [7], and SuperLU [8] are direct methods. The conjugate gradient method [9], the multigrid method [10], and the streaming multigrid solver [11] are commonly used iterative solvers. However, direct solvers scale poorly and are therefore not well suited to large linear systems, while iterative solvers are less robust and often need a large number of iterations to converge.…”
Section: Introduction (mentioning, confidence: 99%)
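To make the direct-versus-iterative contrast in this excerpt concrete, here is a small SciPy sketch that solves the same symmetric positive definite model problem with a sparse direct factorization (spsolve, by default backed by SuperLU) and with the conjugate gradient method. The 1-D Laplacian test matrix and its size are arbitrary choices for the example.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spsolve

# Illustrative SPD system: 1-D Laplacian.
n = 500
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(A, b)        # sparse direct solve (LU factorization)
x_iter, info = cg(A, b)         # conjugate gradient, cheap per iteration
print(info, np.linalg.norm(A @ x_iter - b))

For a small model problem both approaches work; the scalability and robustness issues the excerpt raises show up on much larger and harder systems.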
“…The diagonal translation operator T_L(k, r_M), the weighting factor W(k_θ), and the probe correction coefficient P(k, r_M) are combined to form the coupling matrix C. The resulting set of linear equations is solved with the Generalized Minimal Residual (GMRES) solver [27] in a least-mean-square (LMS) sense [28] as…”
Section: Near-Field Error Analysis (mentioning, confidence: 99%)
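This excerpt applies GMRES to a least-mean-square formulation of the probe-corrected coupling system. One plausible way to realize that combination (a hedged sketch, not necessarily what [27] and [28] do) is to feed GMRES the square, Hermitian normal equations C^H C x = C^H b through an implicit operator, as below; the random complex coupling matrix and its dimensions are purely illustrative.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Illustrative overdetermined coupling system C x ~= b.
rng = np.random.default_rng(1)
m, n = 300, 120
C = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Least-squares solution via GMRES on the normal equations C^H C x = C^H b,
# with the product applied implicitly so C^H C is never formed.
N = LinearOperator((n, n), matvec=lambda x: C.conj().T @ (C @ x), dtype=complex)
x, info = gmres(N, C.conj().T @ b, restart=n, maxiter=200)
print(info, np.linalg.norm(C @ x - b))

For an actual least-squares solve, LSQR or conjugate gradient on the normal equations would be the more conventional choices; the point here is only how a square, solvable system is obtained from a rectangular C.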