2020
DOI: 10.1007/978-3-030-39081-5_15

A Limited Memory Gradient Projection Method for Box-Constrained Quadratic Optimization Problems

Cited by 1 publication (4 citation statements)
References 9 publications
“…In particular, we study if the LM idea can be borrowed in order to generate m approximations of the eigenvalues of the objective function Hessian matrix restricted to those indices corresponding to the inactive components at the solution. A first attempt to achieve this target has been carried out in [28,30] for GP methods equipped with a line search along the feasible direction. In the following we propose a detailed analysis for the GP method with line search along the projection arc, that deserves an ad hoc study due to its intrinsic differences with respect to the case in which the line search along the feasible direction is used.…”
Section: Generalized Limited Memory Steplength Selection Rules (mentioning)
confidence: 99%
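For context, here is a sketch of the unconstrained limited-memory construction the excerpt alludes to. The notation is generic (it is not the cited paper's equations (27) or (30)); it follows the standard Ritz-value steplength idea of recovering m approximate Hessian eigenvalues from the last m stored gradients.

```latex
\[
  g_{j+1} = g_j - \alpha_j A g_j
  \;\Longrightarrow\;
  A G = \begin{bmatrix} G & g_{k+m} \end{bmatrix} J ,
  \qquad
  G = \begin{bmatrix} g_k & \cdots & g_{k+m-1} \end{bmatrix},
\]
where $J \in \mathbb{R}^{(m+1)\times m}$ is bidiagonal with $1/\alpha_{k+j-1}$ on the
diagonal and $-1/\alpha_{k+j-1}$ on the subdiagonal. With the thin QR factorization
$G = QR$,
\[
  T \;=\; Q^{T} A Q \;=\; \begin{bmatrix} R & Q^{T} g_{k+m} \end{bmatrix} J R^{-1}
\]
is $m \times m$, and its eigenvalues (the Ritz values of $A$ on $\mathrm{span}(G)$)
provide the $m$ approximate Hessian eigenvalues whose reciprocals are used as the
next steplengths.
```

In the box-constrained setting discussed in the excerpt, the same construction is applied to the Hessian restricted to the components estimated as inactive, which is where the complications addressed by the citing paper arise.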
“…In general, we cannot ensure that for m + 1 successive iterations the sets F_{k+j}, j = 0, 1, …, m, are all the same and hence the matrix T_{F^{(k+1,k+m)}}, whose eigenvalues are approximations of those of A_{F^{(k+1,k+m)},F^{(k+1,k+m)}}, cannot simply be written as in (30). Indeed, in view of (27), the matrix T_{F^{(k+1,k+m)}} has the form […]. In practice, the computation of the matrix E_{F^{(k+1,k+m)}} is not affordable and, as a consequence, along the iterative procedure it is only possible to approximate the eigenvalues of T_{F^{(k+1,k+m)}}, and we now investigate how to achieve this goal.…”
Section: Practical Approximation of the Eigenvalues of A_{F^{(k+1,k+m)},F^{(k+1,k+m)}} (mentioning)
confidence: 99%
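The excerpt's point is that once the inactive sets change across the memorized iterations, the exact small matrix T_{F^{(k+1,k+m)}} can no longer be formed cheaply (the correction term E is unaffordable), so its eigenvalues are only approximated along the iterations. As a rough illustration of that practical compromise, the sketch below (not the cited paper's actual rule; the function name, the simple active-set test, and the omission of any line search are our own illustrative choices) runs a gradient projection step along the projection arc whose steplengths are reciprocals of Ritz values computed from the last m gradients restricted to the current inactive-set estimate.

```python
import numpy as np

def lm_gradient_projection(A, b, lo, hi, x0, m=3, max_iter=500, tol=1e-8):
    """Gradient projection for  min 0.5*x'Ax - b'x  s.t.  lo <= x <= hi.

    Steplengths are reciprocals of Ritz values of A restricted to the
    components currently estimated as inactive, computed from the last m
    stored gradients (a simplified stand-in for the limited-memory rule
    discussed in the excerpt; no line search is performed here).
    """
    x = np.clip(x0, lo, hi)
    grad_mem = []                  # last m full gradients
    step_queue = []                # reciprocal Ritz values waiting to be used

    for _ in range(max_iter):
        g = A @ x - b
        # Inactive-set estimate: components not pushed against an active bound.
        inactive = ~(((x <= lo) & (g > 0)) | ((x >= hi) & (g < 0)))
        pg = np.where(inactive, g, 0.0)          # projected gradient
        if np.linalg.norm(pg) < tol:
            break

        if not step_queue:
            sub = np.ix_(inactive, inactive)
            if grad_mem:
                G = np.column_stack([gi[inactive] for gi in grad_mem[-m:]])
            else:
                G = np.empty((int(inactive.sum()), 0))
            if G.shape[1] > 0 and np.linalg.norm(G) > 0:
                Q, _ = np.linalg.qr(G)           # orthonormal basis of the gradient span
                T = Q.T @ A[sub] @ Q             # small restricted Ritz matrix
                ritz = np.linalg.eigvalsh(0.5 * (T + T.T))
                step_queue = [1.0 / r for r in ritz if r > 1e-12]
            if not step_queue:
                # Fallback: Cauchy steplength on the inactive components.
                denom = pg @ (A @ pg)
                step_queue = [(pg @ pg) / denom if denom > 0 else 1.0]

        alpha = step_queue.pop(0)
        x = np.clip(x - alpha * g, lo, hi)       # step along the projection arc
        grad_mem = (grad_mem + [A @ x - b])[-m:]

    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 20))
    A = M @ M.T + np.eye(20)                     # SPD Hessian of the quadratic
    b = rng.standard_normal(20)
    lo, hi = -np.ones(20), np.ones(20)
    x = lm_gradient_projection(A, b, lo, hi, np.zeros(20))
    print("objective:", 0.5 * x @ A @ x - b @ x)
```

Using the current inactive set for every stored gradient is exactly the kind of simplification the excerpt warns about: it silently assumes the sets F_{k+j} coincide over the memory window, which is why the citing paper studies how to approximate the eigenvalues of the true T_{F^{(k+1,k+m)}} instead.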