1998
DOI: 10.1137/s003614299427315x

Gradient Method with Retards and Generalizations

Abstract: A generalization of the steepest descent and other methods for solving a large-scale symmetric positive definite system Ax = b is presented. Given a positive integer m, the new iteration is x_{k+1} = x_k − λ(x_{ν(k)})(Ax_k − b), where λ(x_{ν(k)}) is the steepest descent step at a previous iterate ν(k) ∈ {k, k − 1, …, max{0, k − m}}. Global convergence to the solution of the problem is established under a more general framework, and numerical experiments are performed that suggest that some strate…
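The retarded iteration in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's code: the uniformly random choice of the retard ν(k) is just one admissible selection rule in the paper's framework, and the parameter values (m = 4, the random SPD test problem) are chosen here only for demonstration.

```python
import numpy as np

def gmr(A, b, x0, m=4, tol=1e-8, max_iter=5000, seed=0):
    """Gradient method with retards (sketch) for an SPD system Ax = b.

    Iterates x_{k+1} = x_k - lam * (A x_k - b), where lam is the exact
    steepest-descent steplength g^T g / (g^T A g) evaluated at the gradient
    of a retarded iterate nu(k) drawn from {max(0, k-m), ..., k}.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    grads = [A @ x - b]                       # gradient history g_j = A x_j - b
    for k in range(max_iter):
        g = grads[k]
        if np.linalg.norm(g) < tol:
            break
        nu = int(rng.integers(max(0, k - m), k + 1))   # retarded index nu(k)
        g_nu = grads[nu]
        lam = (g_nu @ g_nu) / (g_nu @ (A @ g_nu))      # SD step at iterate nu(k)
        x = x - lam * g
        grads.append(A @ x - b)
    return x

# Usage on a random well-conditioned SPD system (assumed test problem):
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)     # SPD by construction
b = rng.standard_normal(20)
x = gmr(A, b, np.zeros(20))
```

Note that ν(k) = k recovers classical steepest descent, while holding each steplength for m consecutive iterations gives the cyclic SD variant discussed in the citing papers below.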

Cited by 145 publications (114 citation statements)
References 18 publications
“…, where α_k^SD is the exact steplength given by (1.3). Formula (1.4) was first proposed in Friedlander et al. (1999), while the particular choice m = 2 was also investigated in Dai (2003) and Raydan & Svaiter (2002). The analysis in Dai & Fletcher (2005a) shows that if m > n/2, cyclic SD is likely R-superlinearly convergent.…”
Section: (mentioning)
confidence: 99%
“…Other possible choices for the stepsize α_k include Dai (2003), Dai & Fletcher (2006), Dai & Yang (2001, 2003), Friedlander et al. (1999), Grippo & Sciandrone (2002), Raydan & Svaiter (2002) and Serafini et al. (2005). In this paper, we refer to (1.6) as the BB formula.…”
Section: (mentioning)
confidence: 99%
“…The BB algorithm has been much studied because of its remarkable improvement over the SD and OM algorithms ([25, 4, 17] and references therein), and proofs of its convergence can be found in [20]. However, a complete explanation of why this simple modification of the SD algorithm improves its performance so considerably has not yet been found, although it has been suggested that the improvement is connected to its nonmonotone convergence, as well as to the fact that it does not produce iterates that get trapped in a low-dimensional subspace [4].…”
Section: Preliminaries (mentioning)
confidence: 99%
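The BB method quoted above can be sketched for the same SPD setting. The formula "(1.6)" referenced by the citing paper is not reproduced here; the sketch below assumes the standard BB1 steplength α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}, which is one of the two original Barzilai–Borwein choices. The nonmonotone behavior of the residual norm, mentioned in the quote as a suggested source of the speedup, is visible if the residuals are printed per iteration.

```python
import numpy as np

def bb(A, b, x0, tol=1e-8, max_iter=5000):
    """Barzilai-Borwein gradient method (BB1 sketch) for SPD Ax = b.

    Assumes x0 is not already the solution, so the initial gradient is
    nonzero and the first (exact SD) steplength is well defined.
    """
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    alpha = (g @ g) / (g @ (A @ g))      # start with an exact SD step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)        # BB1: s^T y = s^T A s > 0 for SPD A
        x, g = x_new, g_new
    return x

# Usage on a random well-conditioned SPD system (assumed test problem):
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)     # SPD by construction
b = rng.standard_normal(20)
x = bb(A, b, np.zeros(20))
```

For SPD quadratics the denominator sᵀy equals sᵀAs and stays positive away from the solution, so no safeguarding is needed in this restricted setting; general nonquadratic use would require the safeguards discussed in the citing literature.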