2005
DOI: 10.1016/j.cam.2004.09.024

Relaxation strategies for nested Krylov methods

Abstract: There are classes of linear problems for which the matrix-vector product is a time-consuming operation because an expensive approximation method is required to compute it to a given accuracy. In recent years different authors have investigated the use of so-called relaxation strategies for various Krylov subspace methods. These relaxation strategies aim to minimize the amount of work spent in the computation of the matrix-vector product without compromising the accuracy of the method or the conv…
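The relaxation idea can be illustrated with a simple tolerance schedule: the accuracy demanded of the (approximate) matrix-vector product at outer step k is loosened roughly in proportion to 1/‖r_{k-1}‖, so inexpensive, low-accuracy products suffice once the outer residual is already small. The sketch below is a minimal illustration in Python; the function name, the cap eps_max, and the decaying residual history are assumptions made here for illustration, not the specific strategies analyzed in the paper.

```python
import numpy as np

def relaxed_matvec_tolerance(outer_residual_norm, outer_tol, eps_max=1.0):
    """Hypothetical relaxation rule: allow the matrix-vector product at the
    current outer step to be computed with relative accuracy
    eps_k = min(eps_max, outer_tol / ||r_{k-1}||)."""
    return min(eps_max, outer_tol / max(outer_residual_norm, 1e-300))

# Toy usage with a geometrically decaying outer residual history.
outer_tol = 1e-10
residual_history = [10.0 ** (-k) for k in range(11)]  # ||r_0||, ..., ||r_10||
for k, rnorm in enumerate(residual_history):
    eps_k = relaxed_matvec_tolerance(rnorm, outer_tol)
    print(f"outer step {k:2d}: ||r|| = {rnorm:.1e}, matvec tolerance = {eps_k:.1e}")
```

Under a fixed strategy every product would have to be computed to accuracy outer_tol; under the relaxed schedule only the first few products need to be that accurate, which is where the savings come from.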

Cited by 33 publications (22 citation statements)
References 20 publications

“…The amount of work in the inversions of the linear systems, measured as the total number of applications of the ILU preconditioner, is reduced by about 30 to 40 percent. This is comparable with reductions that are seen in applications of inexact Krylov methods in other areas, see for a more detailed discussion [40,Section 3]. For the purpose of illustration we have included in Figure 6.2 and Figure 6.3 a visual representation of the results for τ = 1/10 and τ = 1/100 for both strategies.…”
Section: Numerical Experiments (supporting)
confidence: 73%
“…Van den Eshof et al. [25] showed that fixing ε_in is nearly optimal if relaxation is not applied. However, even 0.1 (10%) residual error can cause a significant number of inner iterations for large-scale problems.…”
Section: Inner Stopping Criteria (mentioning)
confidence: 99%
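The trade-off described in this excerpt can be put into rough numbers. Assuming, purely for illustration, an inner solver that converges linearly with rate rho and an outer residual that decays geometrically, the sketch below compares the total number of inner iterations when the inner tolerance is held fixed at the outer tolerance against a relaxed schedule in which the inner tolerance grows as the outer residual shrinks; rho, outer_tol, and the residual history are hypothetical values, not figures from the paper or the citing work.

```python
import numpy as np

# Back-of-envelope comparison under purely illustrative assumptions: an inner
# solver that converges linearly with rate rho, so reaching relative accuracy
# eps costs about log(eps) / log(rho) inner iterations, and an outer residual
# that decays geometrically from 1 down to outer_tol.
rho = 0.5                 # assumed inner convergence rate
outer_tol = 1e-8
outer_residuals = np.logspace(0, np.log10(outer_tol), num=20)

def inner_iters(eps, rho=rho):
    """Estimated inner iterations needed to reach relative accuracy eps."""
    return int(np.ceil(np.log(eps) / np.log(rho))) if eps < 1.0 else 0

fixed_cost = sum(inner_iters(outer_tol) for _ in outer_residuals)
relaxed_cost = sum(inner_iters(min(1.0, outer_tol / r)) for r in outer_residuals)

print(f"fixed inner tolerance  : {fixed_cost} inner iterations in total")
print(f"relaxed inner tolerance: {relaxed_cost} inner iterations in total")
```

In this toy model the relaxed schedule spends roughly half the inner work of the fixed one, in the same ballpark as the 30 to 40 percent reductions quoted above.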
“…In addition to more efficient solutions, the inner-outer scheme prevents numerical errors that arise because of the deviations of the computed residual from the true residual by significantly decreasing the number of outer iterations. This is because the "residual gap," i.e., the difference between the true and computed residuals, increases with the number of iterations [25]. Another benefit of the reduction in iteration counts appears when the iterative solutions are performed with the generalized minimal residual (GMRES) algorithm, which is usually an optimal method for EFIE in terms of the processing time [14,18].…”
Section: Iterative Preconditioning Based on AMLFMA (mentioning)
confidence: 99%
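The growth of the residual gap mentioned in this excerpt can be reproduced with a very small experiment. In the sketch below the residual of a damped Richardson iteration is updated recursively while every matrix-vector product carries a small error; the printed gap between the true residual b - A x and the recursively computed one increases with the iteration count. The diagonal test matrix, the damping parameter, and the error level eta are assumptions made here for illustration only.

```python
import numpy as np

# Illustrative sketch (the diagonal test matrix, damped Richardson iteration,
# and artificial matvec error eta are assumptions, not the cited setup): the
# residual is updated recursively while every matrix-vector product is
# perturbed, and the distance between the true residual b - A x and the
# recursively computed one (the residual gap) is printed as it accumulates.
rng = np.random.default_rng(1)
n = 100
A = np.diag(np.linspace(1.0, 100.0, n))   # SPD test matrix with slow Richardson convergence
b = rng.standard_normal(n)
omega = 2.0 / 101.0                       # damping parameter for eigenvalues in [1, 100]
eta = 1e-10                               # assumed relative error of each matvec
u = np.ones(n) / np.sqrt(n)               # fixed perturbation direction, so accumulation is visible

x = np.zeros(n)
r = b.copy()                              # recursively updated ("computed") residual
for k in range(1, 201):
    exact = A @ r
    Ar = exact + eta * np.linalg.norm(exact) * u   # inexact matrix-vector product
    x += omega * r
    r -= omega * Ar                       # residual recurrence uses the inexact product
    if k in (25, 50, 100, 200):
        gap = np.linalg.norm((b - A @ x) - r)
        print(f"iter {k:3d}: computed ||r|| = {np.linalg.norm(r):.2e}, residual gap = {gap:.2e}")
```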
“…Based on [19], similar strategies have been applied successfully to the solution of heterogeneous diffusion problems using domain decomposition [21], as preconditioners for radiation diffusion problems [135], in electromagnetics problems [79], in quantum chromodynamics [40], and in ocean circulation models of steady barotropic flows [130]. Significant steps toward a theoretical explanation of the behavior observed above were proposed in [112], [129], and, more recently, in [52].…”
Section: Inexatos (unclassified)