2018
DOI: 10.1137/17m1128538
A Two-Phase Gradient Method for Quadratic Programming Problems with a Single Linear Constraint and Bounds on the Variables

Abstract: We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J. J. Moré and G. Toraldo, SIAM J. Optim., 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization…
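The abstract's identification phase builds on gradient projection for bound constraints, where projection onto a box reduces to componentwise clipping. Below is a minimal sketch of that building block for the bound-constrained QP min 0.5 x'Ax − b'x, l ≤ x ≤ u; it is not the paper's two-phase algorithm, and the fixed step size 1/λ_max(A) is an illustrative choice assuming A is symmetric positive definite.

```python
import numpy as np

def project_box(x, l, u):
    # Projection onto the box [l, u] is simple componentwise clipping.
    return np.clip(x, l, u)

def gradient_projection(A, b, l, u, x0, max_iter=500, tol=1e-8):
    # Minimize 0.5 x'Ax - b'x subject to l <= x <= u (A assumed SPD).
    x = project_box(x0, l, u)
    alpha = 1.0 / np.linalg.eigvalsh(A)[-1]  # conservative step 1/lambda_max
    for _ in range(max_iter):
        g = A @ x - b                        # gradient of the quadratic
        x_new = project_box(x - alpha * g, l, u)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

On a toy problem with A = 2I, b = (3, −3) and bounds [0, 1], the unconstrained minimizer (1.5, −1.5) is infeasible and the iteration settles on the projected solution (1, 0), with both bounds active.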


Cited by 22 publications (14 citation statements)
References 33 publications
“…In order to verify the above conclusions on a more exhaustive set of large-scale problems, we used the software package downloadable at http://www.dimat.unina2.it/diserafino/dds sw.htm for randomly generating box-constrained quadratic problems. Following the procedure proposed in [32,33], the software allows one to generate test problems with different sizes, spectral properties, and numbers of active constraints at the solution. We generated a dataset of 108 strictly convex QP problems with nondegenerate solutions, split into three groups of increasing size: n = 15000, 20000, 25000; for each group, 36 problems are generated by considering three values for the number n_a of active constraints at the solution, n_a ≈ 0.1·n, 0.5·n, 0.9·n, three values for the condition number κ(A) of the Hessian matrix A, κ(A) = 10^4, 10^5, 10^6, and four levels of near-degeneracy, obtained by setting the positive Lagrangian multipliers β*_i, i = 1, .…”
Section: Numerical Results on Large-Scale Box-Constrained QP Problems
confidence: 99%
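The excerpt above describes generating test Hessians with prescribed condition numbers. A hedged sketch of one common way to do this (a random orthogonal basis with log-spaced eigenvalues) is shown below; the actual package cited at dimat.unina2.it additionally controls the number of active constraints and the degree of near-degeneracy, which this toy generator does not.

```python
import numpy as np

def random_spd(n, cond, rng=None):
    # Build a random symmetric positive definite matrix with kappa(A) = cond.
    rng = np.random.default_rng(rng)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
    # Log-spaced eigenvalues from 1 to cond give the target condition number.
    eigs = np.logspace(0, np.log10(cond), n)
    return (Q * eigs) @ Q.T                           # Q diag(eigs) Q^T
```

For example, `random_spd(20, 1e4)` returns a symmetric matrix whose numerically computed condition number matches 10^4 to machine precision.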
“…where ϕ(x^{(k)}) is the vector with the entries defined in (33). As in [34], given a set P of n_p problems, for each problem p we consider the ratio r_{p,s} of the computing time of solver s versus the best time of all the solvers (performance ratio) and, for each solver s and for θ ≥ 1, define…”
Section: Numerical Results on Large-Scale Box-Constrained QP Problems
confidence: 99%
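The truncated excerpt is describing a performance profile: each solver's curve ρ_s(θ) is the fraction of problems it solves within a factor θ of the best solver's time. A minimal sketch, using illustrative timing data rather than the cited experiments:

```python
import numpy as np

def performance_profile(times, thetas):
    """times: (n_problems, n_solvers) array of computing times.

    Returns a (len(thetas), n_solvers) array where entry (t, s) is
    rho_s(theta_t): the fraction of problems for which solver s is
    within a factor theta_t of the fastest solver on that problem.
    """
    # r_{p,s} = t_{p,s} / min_s t_{p,s}  (performance ratio per problem)
    ratios = times / times.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= t)
                      for s in range(times.shape[1])]
                     for t in thetas])
```

With three problems and two solvers, e.g. times [[1, 2], [3, 1.5], [2, 2]], each solver is fastest (ratio ≤ 1) on two of the three problems, and both reach ρ = 1 by θ = 2.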
“…The second condition in (12) can be achieved by using any constrained minimization algorithm. We note that, for the restoration problems considered in this work, gradient-projection methods, such as those in [10,23,37], are suited to the solution of the inner problems (11). Indeed, numerical experiments have shown that very low accuracy is required in practice in the solution of the inner problems; furthermore, the computational cost per iteration of gradient projection methods is modest when low-cost algorithms for the projection onto the feasible set are available.…”
Section: IRN-Based Inexact Minimization Methods
confidence: 99%
“…The optimization problem in Equation (17) is an LCQP problem that can be solved by off-the-shelf toolboxes [35,36,40]. Here, we employ MATLAB's quadprog solver to implement SANSF, in which the computational complexity is on the order of O(8P^3) real operations [41].…”
Section: Sparsity-Aware Noise Subspace Fitting
confidence: 99%
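The excerpt above solves its LCQP with MATLAB's quadprog. A hedged Python analogue, using SciPy's general-purpose SLSQP method on a small problem of the same shape (quadratic objective, one linear equality constraint, bounds); the data are illustrative, not from the cited work:

```python
import numpy as np
from scipy.optimize import minimize

def solve_lcqp(A, b, c, d, lb, ub, x0):
    """min 0.5 x'Ax + b'x  s.t.  c'x = d,  lb <= x <= ub."""
    obj = lambda x: 0.5 * x @ A @ x + b @ x
    jac = lambda x: A @ x + b                     # exact gradient speeds SLSQP
    cons = [{"type": "eq", "fun": lambda x: c @ x - d}]
    res = minimize(obj, x0, jac=jac, method="SLSQP",
                   bounds=list(zip(lb, ub)), constraints=cons)
    return res.x
```

For instance, minimizing ||x||^2 over the simplex slice x_1 + x_2 = 1, 0 ≤ x ≤ 1 returns the symmetric point (0.5, 0.5).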