2018
DOI: 10.1287/moor.2017.0900
Nonconvex Lagrangian-Based Optimization: Monitoring Schemes and Global Convergence

Abstract: We introduce a novel approach addressing the global analysis of a difficult class of nonconvex, nonsmooth optimization problems within the important framework of Lagrangian-based methods. This genuinely nonlinear class captures many problems in disparate fields of modern applications. It features complex geometries, while qualification conditions and other regularity properties do not hold everywhere. To address these issues we work along several research lines to develop an original general Lagrangian methodology which ca…

Cited by 46 publications (56 citation statements). References 26 publications.
“…A method derived from ADMM has also been proposed for optimizing a biaffine model for training deep neural networks [53]. For general nonlinear constraints, a framework for "monitored" Lagrangian-based multiplier methods was studied in [4].…”
Section: Algorithm 1 (ADMM), mentioning
confidence: 99%
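As background for the ADMM-based methods named in this statement, the sketch below runs the standard ADMM iteration on a toy lasso problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1 in consensus form; the objective, helper names, and parameter values are illustrative assumptions only and are not drawn from [4] or [53].

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Illustrative ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    # written as min_{x,z} f(x) + g(z) subject to x = z.
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual variable
    Q = A.T @ A + rho * np.eye(n)                    # system matrix for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(Q, Atb + rho * (z - u))  # x-update: quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # z-update: proximal step
        u = u + x - z                                # scaled dual (multiplier) update
    return z

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(admm_lasso(A, b, lam=0.1))
```

The dual update above is the plain multiplier step; the monitoring machinery described in [4] is not reproduced here.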
“…Splitting algorithms for solving problems of the form (1.2) have been considered in [19], under the assumption that H is twice continuously differentiable with bounded Hessian, in [25], under the assumption that one of the summands is convex and continuous on its effective domain, and in [13], as a particular case of a general nonconvex proximal ADMM algorithm. We would like to mention in this context also [10] for the case when A is nonlinear. The convergence analysis we will carry out in this paper relies on a descent inequality, which we prove for a regularization of the augmented Lagrangian $L_\beta : \mathbb{R}^m \times \mathbb{R}^q \times \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R} \cup \{+\infty\}$,
$$L_\beta(x, y, z, u) = F(z) + G(y) + H(x, y) + \langle u, Ax - z \rangle + \frac{\beta}{2}\|Ax - z\|^2, \qquad \beta > 0,$$
associated with problem (1.1).…”
Section: Problem Formulation and Motivation, mentioning
confidence: 99%
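To make the quoted augmented Lagrangian concrete, here is a minimal sketch that evaluates L_beta(x, y, z, u) = F(z) + G(y) + H(x, y) + <u, Ax - z> + (beta/2)*||Ax - z||^2; the particular F, G, H, the matrix A, and all dimensions below are placeholder assumptions, not the functions of the cited formulation.

```python
import numpy as np

def augmented_lagrangian(x, y, z, u, beta, F, G, H, A):
    # Evaluates L_beta(x, y, z, u) = F(z) + G(y) + H(x, y)
    #   + <u, Ax - z> + (beta/2) * ||Ax - z||^2, as in the quoted formulation.
    r = A @ x - z                                   # constraint residual Ax - z
    return F(z) + G(y) + H(x, y) + u @ r + 0.5 * beta * (r @ r)

# Placeholder problem data (assumptions for illustration only).
A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, -1.0]])       # maps R^2 -> R^3
F = lambda z: np.sum(np.abs(z))                            # a nonsmooth term
G = lambda y: 0.5 * np.sum(y ** 2)                         # a smooth term
H = lambda x, y: np.sum(x ** 2) * (1.0 + np.sum(y ** 2))   # a smooth coupling term
x = np.array([0.5, -1.0])
y = np.array([0.2, 0.0, -0.1])
z = np.array([0.1, 0.2, -0.3])
u = np.zeros(3)
print(augmented_lagrangian(x, y, z, u, beta=10.0, F=F, G=G, H=H, A=A))
```

The descent inequality mentioned in the quote concerns a regularization of this function; only the plain evaluation is shown here.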
“…Note that the minimizer of this majorizer can not only be computed efficiently, owing to its separability, but the majorizer also allows for a global view of the function, and its minimizer almost coincides with the global minimum even though the initial point is quite far from it. Finally, a recent preprint [15] proposes to solve composite minimization problems with a different approach, namely a nonlinear splitting variant, reformulating the problem to…”
Section: Related Work, mentioning
confidence: 99%
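The nonconvex majorizer (3) referred to in this statement is specific to the citing work and is not reconstructed here; purely as an assumed illustration of why a separable majorizer is convenient, the sketch below runs majorization-minimization with the standard separable quadratic majorizer of a smooth function, whose exact minimizer decouples coordinate-wise into a gradient step.

```python
import numpy as np

def mm_step(x, grad_f, L):
    # One majorization-minimization step with the separable quadratic majorizer
    #   f(x_k) + <grad f(x_k), x - x_k> + (L/2) * ||x - x_k||^2.
    # Because the majorizer decouples over coordinates, its exact minimizer
    # is a plain gradient step.
    return x - grad_f(x) / L

# Toy smooth objective f(x) = 0.5 * ||x - c||^2 (an assumption for illustration).
c = np.array([1.0, -2.0, 3.0])
grad_f = lambda x: x - c
x = np.zeros(3)
for _ in range(50):
    x = mm_step(x, grad_f, L=1.0)
print(x)  # converges to c, the global minimizer of the toy objective
```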
“…Critically, both our approach and theirs rely on the efficient solution of a nonlinear programming task as an intermediate step in the algorithm. For us, this is the nonconvex majorizer (3); the corresponding problem in [15, Eq. (6.3)] is the minimization of (8) for u:…”
Section: Related Work, mentioning
confidence: 99%