2002
DOI: 10.1016/s0895-7177(02)00266-2

Smooth and adaptive gradient method with retards

Abstract: The gradient method with retards (GMR) is a nonmonotone iterative method recently developed to solve large, sparse, symmetric, and positive definite linear systems of equations. Its performance depends on the retard parameter $\overline{m}$: the larger $\overline{m}$ is, the faster the convergence, but also the faster precision is lost in the algorithm's intermediate computations. This loss of precision is mainly produced by the nonmonotone behavior of the norm of th…
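For intuition about how the retard parameter enters the iteration, the sketch below implements one plausible retarded-gradient scheme in Python: the Cauchy (steepest-descent) step length is evaluated at a gradient delayed by a randomly chosen number of past iterations, at most m_bar. The names (gmr, m_bar), the random retard schedule, and the stopping rule are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def gmr(A, b, m_bar=4, tol=1e-8, max_iter=5000, seed=0):
    """Sketch of a gradient method with retards for SPD Ax = b (assumed scheme)."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    g = A @ x - b                    # gradient of f(x) = 0.5 x^T A x - b^T x
    past = [g.copy()]                # recent gradients, kept for the retard
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol * np.linalg.norm(b):
            break
        d = rng.integers(0, min(k, m_bar) + 1)   # retard: look 0..m_bar steps back
        gr = past[-1 - d]                        # retarded gradient
        alpha = (gr @ gr) / (gr @ (A @ gr))      # delayed Rayleigh-quotient step
        x = x - alpha * g
        g = A @ x - b
        past.append(g.copy())
        if len(past) > m_bar + 1:
            past.pop(0)
    return x

# Usage: a small well-conditioned SPD test problem.
rng = np.random.default_rng(1)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)
b = rng.standard_normal(100)
x = gmr(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm should be near tol * ||b||

With $\overline{m} = 0$ this reduces to the classical steepest-descent step; larger retards break the monotone decrease of the gradient norm, which is precisely the instability that motivates the paper's smoothing.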

Year published by citing works: 2006–2023

Publication Types

Select...
6
2

Relationship

0
8

Authors

Journals

Cited by 8 publications (5 citation statements) | References 10 publications
“…A closer examination of AOA and MGC reveals that they are more stable than SDC. Such a feature may contribute to the problem of loss of precision [24]. The new methods with alignment present several advantages over the Krylov subspace methods.…”
Section: Discussion (mentioning)
confidence: 99%
“…Further insight into the plots can be gained by observing the oscillating behavior, which reveals that SDA usually has a large magnitude of oscillation, while MGA is the smoothest. It is known that the oscillation of a convergence curve is closely related to numerical stability [24]. In view of the convergence performance and the stability behavior of the three aligned methods, the MGA step is recommended over the SDA step.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
“…The DWGM can be seen as a variant of the parallel tangent method (PARTAN) in the sense that it uses two line searches per iteration, with information from previous points, to accelerate the gradient method [18,27,28]. In [24] a smoothing technique is introduced to prevent the so-called zigzagging behavior of the sequence of gradient norms, which is characteristic of CG methods.…”
Section: Introduction (mentioning)
confidence: 99%
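The smoothing mentioned in this citation statement can be illustrated with the standard minimal-residual smoothing recipe: an auxiliary sequence is formed whose residual norms never increase, removing the zigzag from the convergence curve. The sketch below is a generic instance of that technique, not necessarily the exact scheme of the cited paper.

def smooth_residuals(xs, rs):
    """Minimal-residual smoothing: given iterates xs and residuals rs,
    return an auxiliary sequence ys whose residual norms never increase."""
    y, s = xs[0], rs[0]
    ys = [y]
    for x, r in zip(xs[1:], rs[1:]):
        d = r - s
        denom = d @ d
        theta = 0.0 if denom == 0.0 else -(s @ d) / denom  # minimizes ||(1-t)s + t r|| over t
        y = y + theta * (x - y)          # smoothed iterate
        s = s + theta * d                # its residual, since r(x) = b - A x is affine in x
        ys.append(y)
    return ys

Because theta = 0 is always admissible, each smoothed residual norm is at most the previous one, which yields the monotone curve that the zigzagging raw iterates lack.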
“…where $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite (SPD) and $b \in \mathbb{R}^n$, several methodologies were proposed [4,7,9,12,14,15,18]. Gradient methods play a key role in this matter.…”
Section: Introduction (mentioning)
confidence: 99%