2020
DOI: 10.1007/s10589-020-00232-9

Properties of the delayed weighted gradient method

Abstract: The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits a surprising, and perhaps unexpected, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the qu…

Cited by 6 publications (11 citation statements)
References 15 publications
“…Some of the properties that DWGM enjoys, established in [2,16], include the non-negativity of β_k for all k, the monotone decrease of {||g_k||_2}, and the Q-linear convergence of {g_k} to zero as k goes to infinity (which implies that {x_k} converges to the unique global minimizer of f), as well as finite convergence, obtained by using the A-orthogonality of the gradient vector at the current iteration with all previous gradient vectors.…”
Section: Delayed Weighted Gradient Method (mentioning)
confidence: 99%
“…Proceeding Series of the Brazilian Society of Computational and Applied Mathematics, v. 9, n. The theoretical properties of the algorithm are inherited from the original DWGM [2,16]. Note that problem (2) is a convex quadratic optimization problem with Hessian given by A^T A.…”
Section: Delayed Weighted Gradient Method (mentioning)
confidence: 99%
“…Each of the two DWGM step sizes is calculated sequentially, so the first step-size information is necessary to calculate the second one. Andreani and Raydan [1] demonstrated several important properties of DWGM, including the finite termination of the method in exact arithmetic. In short, DWGM can outperform the conjugate gradient method [1,13] and is therefore a candidate method for practical problems.…”
Section: Introduction (mentioning)
confidence: 99%
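The two sequentially computed step sizes mentioned in the snippet above can be sketched in a few lines. The following is a minimal pure-Python illustration of a DWGM-style iteration for minimizing f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, assuming the commonly stated update form: a first step size α_k = gₖᵀAgₖ / ‖Agₖ‖², a trial point y_k = x_k − α_k g_k, and a second step size β_k chosen to minimize ‖g_{k+1}‖₂ along the segment through the delayed iterate x_{k−1}. All names are illustrative; this is a sketch of the idea under those assumptions, not the authors' reference implementation.

```python
# Sketch of a delayed weighted gradient (DWGM-style) iteration for
# f(x) = 0.5 x^T A x - b^T x, A symmetric positive definite.
# Pure Python (lists of floats) to keep the example self-contained.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def dwgm(A, b, x0, tol=1e-12, max_iter=200):
    x = list(x0)
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]  # gradient A x - b
    x_prev, g_prev = list(x), list(g)                 # delayed iterate (x_{-1} = x_0)
    for _ in range(max_iter):
        if dot(g, g) ** 0.5 <= tol:
            break
        w = matvec(A, g)
        alpha = dot(g, w) / dot(w, w)                 # first step size
        y = [xi - alpha * gi for xi, gi in zip(x, g)]
        r = [gi - alpha * wi for gi, wi in zip(g, w)] # gradient at y
        d = [ri - gi for ri, gi in zip(r, g_prev)]
        dd = dot(d, d)
        if dd == 0.0:                                 # guard: beta undefined
            break
        beta = -dot(g_prev, d) / dd                   # second step size,
                                                      # minimizes ||g_{k+1}||_2
        x_new = [xp + beta * (yi - xp) for xp, yi in zip(x_prev, y)]
        g_new = [gp + beta * (ri - gp) for gp, ri in zip(g_prev, r)]
        x_prev, g_prev = x, g
        x, g = x_new, g_new
    return x
```

Note that at the first iteration the formula gives β_0 = 1, so the method starts as plain steepest descent and the "delay" only influences later iterations, which is consistent with the sequential dependence of the second step size on the first.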
“…Andreani and Raydan [1] demonstrated several important properties of DWGM, including the finite termination of the method in exact arithmetic. In short, DWGM can outperform the conjugate gradient method [1,13] and is therefore a candidate method for practical problems. In this article we develop a two-step gradient method where both step sizes are simultaneously calculated as optimal solutions of a bidimensional optimization problem.…”
Section: Introduction (mentioning)
confidence: 99%