2018
DOI: 10.1007/s10107-018-1300-6

On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming

Cited by 41 publications (44 citation statements)
References 48 publications
“…While the global convergence of Algorithm 1 follows directly from [36,29], the conditions required in [36,29] to guarantee the local linear convergence of both {x^k} and {(y^k, z^k)} may no longer hold for the SGLasso problem due to the non-polyhedral property of the ℓ2 norm function. Fortunately, the new results established in [8] on the convergence rates of the ALM allow us to establish the following theorem, which proves the global Q-linear convergence of the primal sequence {x^k} and the global R-linear convergence of the dual infeasibility and the dual objective values. Furthermore, the linear rates can be arbitrarily fast if the penalty parameter σ_k is chosen sufficiently large.…”
Section: Convergence Rates For Algorithm
mentioning
confidence: 99%
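As a reminder of the two notions of linear convergence used in the excerpt above (standard definitions; the constants ρ and c are generic, not taken from the cited works):

\[
\|x^{k+1}-x^\ast\| \le \rho\,\|x^{k}-x^\ast\| \ \text{ for all large } k \quad (\text{Q-linear, } \rho\in(0,1)),
\]
\[
\|x^{k}-x^\ast\| \le c\,\rho^{k} \ \text{ for all } k \quad (\text{R-linear, } c>0,\ \rho\in(0,1)).
\]

R-linear convergence only requires the errors to be dominated by a geometric sequence, so it is a weaker property than Q-linear convergence.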
“…As a result, the asymptotic superlinear convergence of both the primal and dual iterative sequences generated by the ALM is no longer guaranteed by the existing theoretical results. Fortunately, by leveraging the recent advances made by Cui, Sun, and Toh [8] on the analysis of the asymptotic R-superlinear convergence of the ALM for convex composite conic programming, we are able to establish the global linear convergence (with an arbitrary rate) of the primal iterative sequence, the dual infeasibility, and the dual function values generated by the ALM for the SGLasso problem. With this convergence result, we can expect the ALM to be highly efficient for solving the SGLasso problem.…”
Section: Introduction
mentioning
confidence: 99%
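To make the generic ALM iteration behind these statements concrete, here is a minimal sketch on a toy equality-constrained quadratic program. It is not the SGLasso solver of the citing work; the problem data, tolerance, and the geometric update of the penalty parameter sigma are illustrative assumptions.

# Minimal augmented Lagrangian method (ALM) sketch for the toy problem
#   minimize 0.5*||x - c||^2  subject to  A x = b.
# The subproblem is solved exactly via a linear system, and sigma is
# increased geometrically, mirroring the "larger sigma, faster linear
# rate" behavior discussed in the excerpts above.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

x = np.zeros(n)      # primal iterate
y = np.zeros(m)      # dual multiplier for A x = b
sigma = 1.0          # penalty parameter

for k in range(50):
    # ALM subproblem: minimize 0.5*||x-c||^2 + <y, Ax-b> + 0.5*sigma*||Ax-b||^2;
    # its optimality condition is (I + sigma*A^T A) x = c - A^T y + sigma*A^T b.
    H = np.eye(n) + sigma * A.T @ A
    x = np.linalg.solve(H, c - A.T @ y + sigma * (A.T @ b))
    r = A @ x - b                   # primal residual
    y = y + sigma * r               # multiplier update
    sigma = min(10.0 * sigma, 1e8)  # increase the penalty (capped)
    if np.linalg.norm(r) < 1e-10:
        break

print(k, np.linalg.norm(A @ x - b))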
“…In 1984, Luque relaxed the upper Lipschitz continuity of the dual solution mapping used in [47], which required the uniqueness of the optimal solution, to an error bound type condition [34, (2.1)] that is known to hold in the polyhedral case [42] but is difficult to verify in the non-polyhedral case. In 2019, Cui et al. [10] established the asymptotic R-superlinear convergence of the KKT residuals and the asymptotic Q-superlinear convergence of the dual sequence generated by the ALM for solving convex NLSDP, under a quadratic growth condition on the dual problem for which neither the local solution nor the multiplier is required to be unique. Their remarkable work improved on [47] by giving a practical stopping criterion for the ALM subproblems under the Robinson constraint qualification (RCQ) (for improvements of implementable stopping criteria, see also [18]) and by obtaining the convergence of the KKT residuals with the use of KKT residual information.…”
Section: Introduction
mentioning
confidence: 99%
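For orientation, one common way to write the two regularity conditions contrasted in this excerpt is as follows; the notation is generic, and the precise formulations in [34] and [10] are tailored to their respective problem classes. Luque's error bound condition for a maximal monotone operator T with zero set T^{-1}(0) asks for constants a ≥ 0 and δ > 0 such that

\[
\operatorname{dist}\big(z,\, T^{-1}(0)\big) \;\le\; a\,\|w\|
\qquad \text{whenever } z \in T^{-1}(w),\ \|w\| \le \delta,
\]

while a quadratic growth condition for a convex problem \(\min f\) with solution set \(\Omega\) and optimal value \(f^\ast\) asks for some \(\kappa > 0\) such that

\[
f(x) \;\ge\; f^\ast + \kappa\,\operatorname{dist}^2(x,\Omega)
\qquad \text{for all } x \text{ near } \Omega .
\]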
“…Their success relies on the upper Lipschitz continuity of the KKT solution mapping when the SOSC is satisfied; see [16,26,30,37,22]. However, this does not hold in the non-polyhedral case, as pointed out in [10] using [5, Example 5.54]. For comprehensive surveys of the augmented Lagrangian method for nonlinear programming, see [4,20,49].…”
Section: Introduction
mentioning
confidence: 99%
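For reference, the upper Lipschitz continuity of a set-valued (e.g., KKT) solution mapping \(S\) at a parameter value \(\bar u\) that this excerpt relies on can be written as follows; the constant \(\kappa\) and the neighborhood are generic:

\[
S(u) \;\subseteq\; S(\bar u) + \kappa\,\|u-\bar u\|\,\mathbb{B}
\qquad \text{for all } u \text{ near } \bar u,
\]

where \(\mathbb{B}\) denotes the closed unit ball.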
“…(1.5) It has been pointed out in many works that understanding the Lipschitzian behavior of T_L^{-1} at the origin is crucial to the study of local convergence results for algorithms in the PPA framework; see [8,14,22,23]. For instance, [22] showed the metric subregularity (as defined in [11]) of T_L, which is closely related to (1.5), under the so-called second order sufficient condition.…”
mentioning
confidence: 99%
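The metric subregularity property mentioned in this excerpt, for the operator \(T_L\) at a point \(\bar z\) with \(0 \in T_L(\bar z)\), can be stated as follows (constant \(\kappa\) and neighborhood generic):

\[
\operatorname{dist}\big(z,\, T_L^{-1}(0)\big) \;\le\; \kappa\, \operatorname{dist}\big(0,\, T_L(z)\big)
\qquad \text{for all } z \text{ near } \bar z .
\]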