2011
DOI: 10.1007/s10957-011-9815-5
Outer Trust-Region Method for Constrained Optimization

Abstract: Given an algorithm A for solving some mathematical problem based on the iterative solution of simpler subproblems, an Outer Trust-Region (OTR) modification of A is the result of adding a trust-region constraint to each subproblem. The trust-region size is adaptively updated according to the behavior of crucial variables. The new subproblems should not be more complex than the original ones, and the convergence properties of the OTR algorithm should be the same as those of Algorithm A. In the present work, the O…
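To make the abstract's idea concrete, the following minimal Python sketch shows one OTR iteration built around an existing subproblem solver. The function names, the merit-based acceptance test, and the radius-update constants are illustrative assumptions, not the paper's actual algorithm.

    def otr_step(x, solve_subproblem, merit, radius, shrink=0.5, grow=2.0):
        """One Outer Trust-Region iteration (illustrative sketch).

        solve_subproblem(x, radius) is assumed to solve the original
        subproblem with the added constraint ||trial - x|| <= radius;
        merit(x) is a scalar measure of progress (an assumption)."""
        trial = solve_subproblem(x, radius)
        if merit(trial) < merit(x):
            # progress was made: accept the trial point and enlarge the region
            return trial, grow * radius
        # no progress: keep the current point and shrink the trust region
        return x, shrink * radius

Note that the subproblem keeps its original structure and only gains the ball constraint, which matches the abstract's requirement that the new subproblems not be more complex than the original ones.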

Cited by 10 publications (6 citation statements). References 9 publications.
“…The trust-region method for unconstrained optimization has a clear global-convergence advantage, which distinguishes it from conventional methods such as the steepest-descent, Newton, and conjugate-gradient methods [36]. In this section, a similar trust-region (TR) concept is introduced as…”
Section: B. Variable Trust Region Methods For Adjusting Search Range
confidence: 99%
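For reference, the classical trust-region loop this quote alludes to can be sketched as below, using a quadratic model and a Cauchy-point step. All names and constants are common textbook defaults, not taken from the cited paper.

    import numpy as np

    def cauchy_point(g, B, radius):
        # Minimizer of the quadratic model along -g inside the ball (sketch).
        t = radius / np.linalg.norm(g)
        gBg = g @ B @ g
        if gBg > 0:
            t = min(t, (g @ g) / gBg)  # unconstrained minimizer along -g
        return -t * g

    def trust_region(f, grad, hess, x, radius=1.0, tol=1e-8, max_iter=200):
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            B = hess(x)
            p = cauchy_point(g, B, radius)
            pred = -(g @ p + 0.5 * p @ B @ p)       # predicted decrease
            rho = (f(x) - f(x + p)) / pred          # actual vs. predicted
            if rho < 0.25:
                radius *= 0.25                      # poor model: shrink
            elif rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
                radius *= 2.0                       # good step at the boundary: grow
            if rho > 0.1:
                x = x + p                           # accept the step
        return x

The global-convergence advantage mentioned in the quote comes from this acceptance test: a step is taken only when the model's predicted decrease is actually realized, regardless of the starting point.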
“…In order to increase the chance of convergence to minimizers (or, at least, to discourage convergence to other stationary points), the matrix of the system is modified in such a way that the modified Hessian of the Lagrangian ∇²L(x, λ) + n_w I is positive definite on the null space of ∇h(x)ᵀ. This goal is achieved through the modification, displayed in (11), of the diagonal of the coefficient matrix in (10), which corresponds to a modification of its inertia. On the other hand, when s_e > 0, the diagonal matrix −s_e I ensures that the last m rows of the coefficient matrix in (11) …”
Section: If a Limit Point Of {X…
confidence: 99%
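The inertia-correction loop this quote describes — enlarging a diagonal shift until the KKT matrix has exactly n positive and m negative eigenvalues — can be sketched as follows. H, A, and the update rule are generic placeholders (the two shifts play the roles of the quote's n_w and s_e); this is an assumption-laden sketch, not the cited paper's implementation.

    import numpy as np
    from scipy.linalg import ldl

    def inertia_corrected_kkt(H, A, s_e=1e-8, shift0=1e-8, grow=10.0):
        """Shift the (1,1) block of [[H + d*I, A.T], [A, -s_e*I]] until the
        matrix has n positive and m negative eigenvalues (sketch)."""
        n, m = H.shape[0], A.shape[0]
        d = 0.0
        while True:
            K = np.block([[H + d * np.eye(n), A.T],
                          [A, -s_e * np.eye(m)]])
            _, D, _ = ldl(K)               # symmetric indefinite factorization
            eigs = np.linalg.eigvalsh(D)   # D is block diagonal; inertia of K = inertia of D
            if (eigs > 0).sum() == n and (eigs < 0).sum() == m:
                return K, d
            d = shift0 if d == 0.0 else grow * d   # enlarge the shift and retry

Correct inertia is essentially equivalent to H + d·I being positive definite on the null space of A, and the −s_e I block keeps the last m rows nonsingular even when A is rank deficient, which is the role the quote attributes to s_e.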
“…The considered set of 283 problems from the CUTEst collection includes 45 problems (16% of the set) that are quadratic-programming reformulations of linear complementarity problems (provided by Michael Ferris). In 11 of these 45 problems, SECO presented a phenomenon named greediness in [12,10], which may affect penalty and Lagrangian methods when the objective function takes very low values (perhaps going to −∞) in the infeasible region. In this case, iterates of the subproblems' solver may be attracted by undesired minimizers, especially during the first outer iterations, and overall convergence may fail to occur.…”
Section: Problems With Equality Constraints and Bound Constraints
confidence: 99%
“…In the numerical experiments, we considered ε_feas = ε_opt = 10⁻⁸. For comparison purposes, it is worth noting that this stopping criterion is identical to the one adopted by Algencan [1,6], and very similar to the one adopted by Ipopt [32]. In Ipopt, the same criterion is used for feasibility, while a relaxed criterion is used for optimality: on the right-hand side of (57), ε_opt appears multiplied by max{s_max, ‖λ‖₁/m}/s_max with s_max = 100.…”
Section: Stopping Criterion and Comparison
confidence: 99%
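For concreteness, the relaxed Ipopt-style optimality tolerance summarized in this quote can be computed as below; reading the multiplier term as the 1-norm ‖λ‖₁ is an assumption.

    import numpy as np

    def relaxed_opt_tol(lam, m, eps_opt=1e-8, s_max=100.0):
        # eps_opt * max{s_max, ||lam||_1 / m} / s_max  (criterion from the quote)
        return eps_opt * max(s_max, np.linalg.norm(lam, 1) / m) / s_max

    # Moderate multipliers leave the tolerance at eps_opt;
    # very large multipliers relax it proportionally.
    print(relaxed_opt_tol(np.array([1.0, -2.0]), m=2))  # 1e-08
    print(relaxed_opt_tol(np.full(3, 1e6), m=3))        # 1e-04

This scaling keeps the dual-infeasibility test from becoming unattainably strict when the Lagrange multipliers are very large.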