2022
DOI: 10.1287/moor.2021.1203

On Optimality Conditions for Nonlinear Conic Programming

Abstract: Sequential optimality conditions play a major role in proving stronger global convergence results of numerical algorithms for nonlinear programming. Several extensions are described in conic contexts, in which many open questions have arisen. In this paper, we present new sequential optimality conditions in the context of a general nonlinear conic framework, which explains and improves several known results for specific cases, such as semidefinite programming, second-order cone programming, and nonlinear progr…


Cited by 15 publications (10 citation statements)
References 63 publications
“…Definition 4.1 (Def. 4 of [4]). We say that a point x ∈ F satisfies the AKKT condition when there exist sequences {x k } k∈N → x and {Y k } k∈N ⊆ S m + , and perturbation sequences {δ k } k∈N ⊆ R n and {∆ k } k∈N ⊆ S m , such that:…”
Section: Stronger Sequential-type Constant Rank Cqs For Nsdp and Glob...mentioning
confidence: 97%
“…An augmented Lagrangian algorithm is also presented in [15] for NSDP, whose global convergence theory is built around AKKT. Such results were sharpened in [5] and further extended in [6], for the general (NCP).…”
Section: Semidefinite Programmingmentioning
confidence: 75%
“…Our approach has a heavy algorithmic taste, as our proof is based on the construction of a sequence of approximate solutions of penalized subproblems, very similarly to a sequence generated by practical algorithms. In particular, a similar first-order approach has recently led to several improvements of global convergence theory of augmented Lagrangian methods in conic contexts [4,5,6,15].…”
Section: Final Remarksmentioning
confidence: 99%
“…For any given ε ≥ 0, we say x ∈ ℝⁿ is an ε-approximate stationary point for problem (3.1) if there exists a ρ ≥ 0 such that (1) dist(F(x); X) ≤ ε, and (2) inf_{‖w‖≤1} { dϕ(x)(w) + ρ dp(F(x))(dF(x)w) } ≥ −ε, where p(y) := dist(y; X) and dp(y)(w) is the subderivative of p calculated in (2.10).…”
Section: Approximate Stationary Solutionsmentioning
confidence: 99%
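The ε-approximate stationarity conditions quoted above can be checked numerically on a toy instance. The sketch below is not from the cited paper: it assumes a one-dimensional problem min ϕ(x) = x² subject to F(x) = x lying in X = [1, ∞), where the subderivatives reduce to ordinary directional derivatives because the data are smooth, and the infimum over ‖w‖ ≤ 1 is approximated on a grid.

```python
import numpy as np

# Toy instance (hypothetical, for illustration only):
#   minimize phi(x) = x^2  subject to  F(x) = x in X = [1, inf).
# Here p(y) = dist(y; X) = max(1 - y, 0), so for y < 1 the
# (sub)derivative is dp(y)(v) = -v, and dp(y)(v) = 0 for y > 1.

def phi_dir(x, w):
    """Directional derivative d phi(x)(w) = 2 x w for phi(x) = x^2."""
    return 2.0 * x * w

def dist_X(y):
    """dist(y; X) for X = [1, inf)."""
    return max(1.0 - y, 0.0)

def p_dir(y, v):
    """Directional derivative of p(y) = dist(y; X) at y in direction v."""
    return -v if y < 1.0 else 0.0

def is_eps_stationary(x, rho, eps, grid=np.linspace(-1.0, 1.0, 2001)):
    """Check conditions (1) and (2) of the quoted definition on a grid."""
    Fx = x  # F is the identity map in this toy example, so dF(x)w = w
    cond1 = dist_X(Fx) <= eps
    inf_val = min(phi_dir(x, w) + rho * p_dir(Fx, w) for w in grid)
    cond2 = inf_val >= -eps
    return bool(cond1 and cond2)

# A slightly infeasible point near the constrained minimizer x* = 1
# satisfies both conditions with a small eps and a suitable penalty rho:
print(is_eps_stationary(1.0 - 1e-4, rho=2.0, eps=1e-3))  # True
# A point far from X violates condition (1):
print(is_eps_stationary(0.5, rho=2.0, eps=1e-3))          # False
```

The choice ρ ≈ 2x mirrors the penalization viewpoint in the quoted remarks: condition (2) balances the objective's directional derivative against ρ times the derivative of the constraint-violation measure.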