2019
DOI: 10.1137/18m1196285
Scalable Linear Solvers Based on Enlarged Krylov Subspaces with Dynamic Reduction of Search Directions

Abstract: Krylov methods are widely used for solving large sparse linear systems of equations. On distributed architectures, their performance is limited by the communication needed at each iteration of the algorithm. In this paper, we study the use of so-called enlarged Krylov subspaces for reducing the number of iterations, and therefore the overall communication, of Krylov methods. In particular, we consider a reformulation of the Conjugate Gradient method using these enlarged Krylov subspaces: the enlarged Conjugate…

Cited by 7 publications (9 citation statements)
References 32 publications
“…Then the unknown vector $x$ is split into $t$ vectors $X^{(i)}$, $1 \le i \le t$, such that we can still retrieve the original vector by summing them: $x = \sum_{i=1}^{t} X^{(i)}$. The parameter $t$ is called the enlarging factor, and the method is in practice not sensitive to the choice of the splitting scheme [16]. Following these considerations, the method can be derived in a similar way to the standard CG solver, replacing the residuals and search directions by $N \times t$ matrices; the optimal step, which is a scalar in CG, becomes a $t \times t$ matrix.…”
Section: Enlarged Conjugate Gradient Solver
confidence: 99%
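The quoted derivation maps directly onto a block-CG-style iteration. The following is a minimal dense NumPy sketch of that structure, not the paper's implementation: `parts` (an assumed partition of the unknowns into $t$ subdomains), `split_residual`, and `enlarged_cg` are illustrative names, and the A-orthonormalization and dynamic reduction of search directions that give the paper its title are deliberately omitted.

```python
import numpy as np

def split_residual(r, parts):
    """Splitting operator: column j keeps the entries of r belonging
    to subdomain j and is zero elsewhere, so the columns sum back to r."""
    R = np.zeros((r.size, len(parts)))
    for j, idx in enumerate(parts):
        R[idx, j] = r[idx]
    return R

def enlarged_cg(A, b, parts, tol=1e-8, maxit=500):
    """Plain enlarged CG with enlarging factor t = len(parts).

    As in the quote: residuals R and search directions P are N x t
    blocks, and the step alpha is a t x t matrix instead of a scalar.
    Without dynamic reduction of directions, P.T @ A @ P can become
    ill-conditioned as individual columns converge.
    """
    X = np.zeros((b.size, len(parts)))        # block iterate; x = row sums
    R = split_residual(b - A @ X.sum(axis=1), parts)
    P = R.copy()
    for _ in range(maxit):
        AP = A @ P
        G = P.T @ AP                          # t x t Gram matrix
        alpha = np.linalg.solve(G, P.T @ R)   # t x t step (a scalar in CG)
        X += P @ alpha
        R -= AP @ alpha
        if np.linalg.norm(R.sum(axis=1)) <= tol * np.linalg.norm(b):
            break
        beta = -np.linalg.solve(G, AP.T @ R)  # keeps new P A-orthogonal to old P
        P = R + P @ beta
    return X.sum(axis=1)                      # retrieve x by summing the t vectors

# Toy check: 1D Laplacian split into two subdomains.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = enlarged_cg(A, b, parts=[np.arange(0, 4), np.arange(4, 8)])
assert np.allclose(A @ x, b, atol=1e-6)
```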
“…The rate of convergence of the ECG is given by the following result, demonstrated in [16]: if $x_k$ is the approximate solution given by the enlarged Conjugate Gradient with an enlarging factor $t$ at step $k$, and $x^*$ the true solution, then we have…”
Section: Enlarged Conjugate Gradient Solver
confidence: 99%
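The quotation is cut off before the bound itself; the exact statement is in [16]. For orientation only, the classical CG energy-norm bound has the form below, and the ECG result is of the same shape with the condition number replaced by an effective one that improves as the enlarging factor $t$ grows (this paraphrase is a gloss, not the theorem's wording):

```latex
% Classical CG energy-norm bound, shown for orientation only;
% [16] proves an ECG analogue in which \kappa is replaced by an
% effective condition number that decreases as t grows.
\| x_k - x^* \|_A \le 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k} \| x_0 - x^* \|_A
```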
“…Reducing communication in iterative methods is a challenging topic, with an additional difficulty to consider: the convergence of iterative methods depends on the spectral properties of the matrices involved. Approaches such as s-step methods (see, e.g., [60], [61] and the references therein) or enlarged Krylov methods [62], [63] are actively investigated.…”
Section: Algorithms Minimizing Data Transfer
confidence: 99%
“…Approaches such as s-step methods (e.g. [60,61] and the references therein) or enlarged Krylov methods [62,63] are actively investigated.…”
Section: Algorithms Minimizing Data Transfer
confidence: 99%
“…The iterative solution to large, sparse linear systems of the form $Ax = b$ often requires many sparse matrix-vector multiplications and costly collective communication in the form of inner products; this is the case with the conjugate gradient (CG) method, and with Krylov methods in general. In this paper, we consider so-called enlarged Krylov methods [15], wherein block vectors are introduced to improve convergence, thereby reducing the amount of collective communication in exchange for denser point-to-point communication in the sparse matrix-block vector multiplication.…”
Section: Introduction
confidence: 99%
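To make the stated trade-off concrete, here is a minimal mpi4py sketch (the function name and row-wise data distribution are assumptions, not details from the cited papers): with $N \times t$ block vectors, all the inner products of one iteration collapse into a single reduction on a $t \times t$ Gram matrix, whereas standard CG pays one latency-bound allreduce per scalar inner product.

```python
from mpi4py import MPI
import numpy as np

def block_gram(P_loc, R_loc, comm=MPI.COMM_WORLD):
    """All inner products of one enlarged-CG iteration in one collective.

    P_loc, R_loc: the locally owned rows (n_local x t) of the block
    search directions and residuals. Standard CG would issue a separate
    allreduce for each scalar inner product instead.
    """
    G_local = P_loc.T @ R_loc             # local t x t contribution
    G = np.empty_like(G_local)
    comm.Allreduce(G_local, G, op=MPI.SUM)
    return G
```

The denser point-to-point side of the trade, the sparse matrix-block-vector product, would sit in the halo exchange that precedes this reduction.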