2019 · DOI: 10.1016/j.parco.2019.05.002

Analyzing and improving maximal attainable accuracy in the communication hiding pipelined BiCGStab method

Abstract: Pipelined Krylov subspace methods avoid communication latency by reducing the number of global synchronization bottlenecks and by hiding global communication behind useful computational work. In exact arithmetic, pipelined Krylov subspace algorithms are equivalent to classic Krylov subspace methods and generate identical series of iterates. However, as a consequence of the reformulation of the algorithm to improve parallelism, pipelined methods may suffer from severely reduced attainable accuracy in a practical…
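To make the communication-hiding idea concrete, the sketch below shows the generic overlap pattern that pipelined Krylov methods build on: a non-blocking global reduction (the latency-bound dot product) is started, the local sparse matrix-vector product runs while the reduction is in flight, and the result is awaited only when it is needed. This is a minimal sketch assuming mpi4py; the names (A_loc, r_loc, w_loc) and the single overlapped step are illustrative, not the actual p-BiCGStab recurrences from the paper.

    # Overlap pattern behind pipelined Krylov methods (illustrative sketch).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    def overlapped_step(A_loc, r_loc, w_loc):
        # Local contribution to the dot product; the global sum is the
        # latency-bound part that pipelining hides.
        local_dot = np.array([r_loc @ w_loc])
        global_dot = np.empty(1)
        req = comm.Iallreduce(local_dot, global_dot, op=MPI.SUM)  # non-blocking
        # Useful computational work overlapped with the reduction:
        # the local block of the sparse matrix-vector product.
        v_loc = A_loc @ w_loc
        req.Wait()  # reduction result is needed only from this point on
        return global_dot[0], v_loc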

Cited by 8 publications (10 citation statements) · References 64 publications
“…In finite precision arithmetic, Cools et al. [19] discussed the effect of local rounding error propagation on the maximal attainable accuracy of the pipelined CG method and compared it with the classical CG and Chronopoulos-Gear CG [16]. In a later paper, Cools [17] gave a similar discussion for the pipelined BiCGStab method. Carson et al. [12] discussed the stability issues for synchronization-reducing algorithms and presented a methodology for the theoretical analysis of some CG variants.…”
Section: Pipelined Krylov Subspace Methods
confidence: 99%
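The "maximal attainable accuracy" discussed in these statements is governed by the gap between the recursively updated residual r_i and the true residual b - A x_i, which grows through local rounding error propagation. Below is a minimal illustration using a plain textbook CG loop (not the pipelined variant analyzed in the paper) that tracks this gap explicitly:

    # Track the residual gap ||b - A x_i - r_i|| in a textbook CG loop.
    # Illustrative only: A is assumed symmetric positive definite, and this
    # is classical CG, not the pipelined method from the paper.
    import numpy as np

    def cg_with_gap(A, b, iters=200):
        x = np.zeros_like(b)
        r = b - A @ x          # recursive residual, updated by recurrence below
        p = r.copy()
        gaps = []
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap            # recurrence accumulates rounding error
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
            gaps.append(np.linalg.norm(b - A @ x - r))  # true vs. recursive residual
        return x, gaps

The gap stagnates at a level set by the accumulated local rounding errors; reformulated recurrences (Chronopoulos-Gear, pipelined CG/BiCGStab) typically stagnate at a larger gap, which is precisely the accuracy loss the paper analyzes.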
“…All Conjugate Gradient variants listed in Table 1 compute a single SPMV in each iteration. However, as indicated by line 8 of the algorithm (if i ≥ l then finalize the dot products g_{j,i−l+1})…”
Section: Floating Point Operations Per Iteration
confidence: 99%
“…Although Alg. 1, line 18 indicates that in each iteration i ≥ (2l + 1) a total of (2l + 1) dot products need to be computed, the number of dot product computations can be limited to (l + 1) by exploiting the symmetry of the matrix G_{i+1}, see expression (8). Since g_{j,i+1} = g_{i−l+1,j+l} for j ≤ i + 1, only the dot products (z_j, z_{i+1}) for j = i − l + 2, …”
Section: Basis Recurrence Relations In Exact Arithmetic
confidence: 99%
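The symmetry argument in this statement is an instance of a general pattern: entries of a Gram matrix G = Z^T Z satisfy g_{j,k} = g_{k,j}, so each new basis vector requires only the genuinely new dot products, with the remaining entries mirrored. A simplified sketch of that pattern follows; the indexing here is generic, not the l-shifted relation of expression (8) quoted above.

    # Extend a Gram matrix G = Z^T Z by one row/column, computing each
    # dot product once and mirroring by symmetry. Generic illustration,
    # not the specific shifted recurrence of the cited algorithm.
    import numpy as np

    def extend_gram(G, Z, i):
        """Fill row and column i+1 of G, given columns 0..i+1 of Z."""
        for j in range(i + 2):
            G[j, i + 1] = Z[:, j] @ Z[:, i + 1]  # new dot products only
            G[i + 1, j] = G[j, i + 1]            # mirrored entry, no extra dot product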
“…Iterative methods are computationally more attractive than direct methods, particularly for large and sparse systems (Freund et al. 1992, Dreyer 2009). There is a wide variety of iterative techniques, such as Conjugate Gradient (CG), minimum residual, generalized minimum residual, bi-conjugate gradient, quasi-minimal residual, conjugate gradient squared, Bi-conjugate Gradient Stabilized (BiCGStab) and Chebyshev iterations (Cools 2019). We used the Corrected Bi-Conjugate Gradient Stabilized Method (CBiCGStab) for the solution of equations with unsymmetric coefficient matrices.…”
Section: Introduction
confidence: 99%
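For readers who want to reproduce the kind of unsymmetric solve mentioned in this statement, SciPy ships the classical BiCGStab (not the pipelined or corrected variants discussed above). A small example on an illustrative, arbitrarily chosen unsymmetric tridiagonal system:

    # Classical BiCGStab via SciPy on an unsymmetric test system.
    # The matrix below is an illustrative example, not from the cited work.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab

    n = 1000
    A = sp.diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x, info = bicgstab(A, b)                # info == 0 means converged
    print(info, np.linalg.norm(b - A @ x))  # true residual norm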