2013
DOI: 10.1007/s10957-013-0323-7

Addressing Rank Degeneracy in Constraint-Reduced Interior-Point Methods for Linear Optimization

Abstract: In earlier work (Tits et al., SIAM J. Optim., 17(1):119-146, 2006; Winternitz et al., COAP, 51(3):1001-1036, 2012), the present authors and their collaborators proposed primal-dual interior-point (PDIP) algorithms for linear optimization that, at each iteration, use only a subset of the (dual) inequality constraints in constructing the search direction. For problems with many more variables than constraints in primal form, this can yield a major speedup in the computation of search directions. However, in order for …
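As a rough illustration of the constraint-reduction idea described in the abstract, the sketch below (Python/NumPy) forms the normal matrix from only a working set Q of the dual inequality constraints. The slack-based selection rule, the function name, and all variable names are assumptions made for illustration, not the authors' precise algorithm.

```python
import numpy as np

def reduced_normal_matrix(A, x, s, q):
    """Constraint-reduced normal matrix M^(Q) = A_Q diag(x_Q / s_Q) A_Q^T.

    A : (m, n) matrix of the dual inequalities A^T y <= c, with n >> m
    x : (n,) current primal iterate (strictly positive)
    s : (n,) current dual slacks s = c - A^T y (strictly positive)
    q : number of constraints kept in the working set Q (m <= q <= n)
    """
    # Illustrative working-set rule: keep the q inequalities with the
    # smallest slacks, i.e. those most likely to be active at a solution.
    Q = np.argsort(s)[:q]
    d = x[Q] / s[Q]                      # diagonal scaling, restricted to Q
    M_Q = (A[:, Q] * d) @ A[:, Q].T      # m-by-m reduced normal matrix
    return M_Q, Q

# Forming M_Q costs O(q m^2) instead of O(n m^2), hence the speedup when
# q << n.  But if A[:, Q] has rank < m, M_Q is singular -- the rank
# degeneracy that the paper addresses via regularization.
```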

Cited by 3 publications (11 citation statements); references 9 publications.
“…Applying regularization to address rank-deficiency of the normal matrix due to constraint reduction was first considered in [35], in the context of linear optimization. There, a regularization similar to that of [9,25] is applied, and the scheme lets the regularization die out as a solution of the optimization problem is approached, so as to preserve fast local convergence.…”
Section: A Regularized Constraint-Reduced MPC Algorithm
confidence: 99%
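The quoted scheme lets the regularization vanish as a solution is approached. A minimal sketch of one such "dying-out" regularization is given below, assuming the parameter is tied to the duality measure mu = x^T s / n; the coupling, the cap rho_max, and the function name are illustrative assumptions, not the exact rule of [35] or of the citing paper.

```python
import numpy as np

def regularized_reduced_matrix(M_Q, x, s, rho_max=1e-2):
    """Add R = rho * I to the (possibly singular) reduced normal matrix,
    with rho driven to zero as optimality is approached.

    M_Q     : (m, m) constraint-reduced normal matrix
    x, s    : (n,) primal iterate and dual slacks (strictly positive)
    rho_max : cap on the regularization parameter (assumed constant)
    """
    mu = float(x @ s) / x.size     # duality measure; tends to 0 at a solution
    rho = min(rho_max, mu)         # regularization "dies out" with mu, so the
                                   # fast local convergence rate is preserved
    return M_Q + rho * np.eye(M_Q.shape[0])
```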
“…First, the stopping criterion is a variation on that of [18,34], involving both λ and [λ]_+ instead of λ alone; in fact the latter will fail when the parameter λ_max (see (26)-(27)) is not large enough, and may fail when second-order sufficient conditions are not satisfied, whereas we prove below (Theorem 1(iv)) that the new criterion is indeed eventually satisfied, in that the iterate (x, λ) converges to a solution (even if it is not unique), possibly only along a subsequence. Second, our update formula for the regularization parameter ϱ in Step 2 improves on that of [35] (ϱ_+ = min{χ, χ_max} in the notation of this paper, where χ_max is a user-defined constant), as it fosters a "smooth" evolution of W from the initial value H + R, with R specified by the user, at a rate no faster than that required for local q-quadratic convergence. And third, R_0 should be selected to compensate for possible ill-conditioning of H, so as to mitigate possible early ill-conditioning of M^(Q).…”
Section: A Regularized Constraint-Reduced MPC Algorithm
confidence: 99%
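To make the contrast in the quotation concrete, here is a toy rendering of the two parameter updates: the capped rule ϱ_+ = min{χ, χ_max} attributed to [35], and a "smoothed" variant that limits how fast ϱ may shrink from its user-supplied start value. Here chi stands in for some optimality-related quantity, and the geometric decay factor is an assumption; the citing paper's exact rate condition (tied to local q-quadratic convergence) is not reproduced.

```python
def rho_update_capped(chi, chi_max):
    """Capped update attributed to [35] in the quotation: rho_+ = min(chi, chi_max)."""
    return min(chi, chi_max)

def rho_update_smoothed(rho, chi, decay=0.5):
    """Illustrative 'smooth' alternative: move rho toward chi, but never let it
    shrink by more than the factor `decay` per iteration, so the regularized
    matrix W evolves gradually from its initial value H + R (the citing paper's
    exact rate condition is not reproduced here)."""
    return max(min(chi, rho), decay * rho)
```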