2015
DOI: 10.1137/140998135

Global Convergence of Splitting Methods for Nonconvex Composite Optimization

Abstract: We consider the problem of minimizing the sum of a smooth function h with a bounded Hessian and a nonsmooth function. We assume that the latter function is a composition of a proper closed function P and a surjective linear map M, with the proximal mappings of τP, τ > 0, simple to compute. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examine two types of splitting methods for solving this nonconvex optimization problem…
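To make the splitting structure concrete, the sketch below shows one ADMM-style scheme for the model problem min_x h(x) + P(Mx), via the reformulation min h(x) + P(y) subject to Mx = y. It is a minimal illustration under assumptions, not the paper's algorithm: the names grad_h and prox_P are assumed interfaces, the x-update is linearized to a single gradient step rather than an exact subproblem solve, and the toy data at the end only exercises the interfaces.

```python
import numpy as np

def admm_splitting(grad_h, prox_P, M, x0, beta=1.0, step=0.01, n_iter=500):
    """Sketch of an ADMM-style splitting for min_x h(x) + P(M x),
    rewritten as min_{x,y} h(x) + P(y) subject to M x = y.

    grad_h(x)    -- gradient of the smooth term h (bounded Hessian assumed)
    prox_P(v, t) -- proximal mapping of t*P, assumed simple to compute
    M            -- surjective linear map, given as a NumPy array
    """
    x = np.asarray(x0, dtype=float).copy()
    y = M @ x
    u = np.zeros_like(y)  # scaled dual variable
    for _ in range(n_iter):
        # x-update: a single gradient step on the augmented Lagrangian in x
        # (a linearized stand-in for solving the exact x-subproblem)
        x = x - step * (grad_h(x) + beta * M.T @ (M @ x - y + u))
        # y-update: proximal step on P with parameter 1/beta
        y = prox_P(M @ x + u, 1.0 / beta)
        # dual (multiplier) update
        u = u + M @ x - y
    return x

# Toy usage with convex pieces, just to exercise the interfaces:
# h(x) = 0.5*||A x - b||^2, P = l1-norm, M = identity.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad_h = lambda x: A.T @ (A @ x - b)
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x_hat = admm_splitting(grad_h, prox_l1, np.eye(10), np.zeros(10))
```

In the paper's setting, h is smooth with a bounded Hessian and P is a proper closed (possibly nonconvex) function whose proximal mapping is cheap to evaluate; the l1 example above stands in only for the prox interface.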


Cited by 352 publications (325 citation statements)
References 32 publications
“…In practice, ADMM works well for many non-convex problems as well [Wen et al. 2012; Chartrand 2012; Chartrand and Wohlberg 2013; Miksik et al. 2014; Lai and Osher 2014; Liavas and Sidiropoulos 2015], but it is more challenging to establish its convergence for general non-convex problems. Only very recently have such convergence proofs been given under strong assumptions [Li and Pong 2015; Magnússon et al. 2016; Wang et al. 2019]. We provide in this paper a general proof of convergence for non-convex problems under weaker assumptions.…”
Section: Related Work
confidence: 91%
“…Although ADMM turns out to be effective for many non-convex problems in practice, its convergence for general non-convex optimization remains an open research question. Recent convergence results such as [Li and Pong 2015; Magnússon et al. 2016; Wang et al. 2019] rely on strong assumptions that are not satisfied by many computer graphics problems. This paper addresses these two issues of ADMM.…”
Section: Introduction
confidence: 99%
“…For the multiblock separable convex problems, with three or more blocks of variables, it is known that the original ADMM is not necessarily convergent (Chen et al. 2016). On the other hand, theoretical convergence analysis of the ADMM for nonconvex problems is rather limited, either making assumptions on the iterates of the algorithm (Xu et al. 2012; Magnusson et al. 2016) or dealing with special non-convex models (Li and Pong 2015; Wang et al. 2014a, 2015), none of which is applicable to the proposed optimization problem (12). However, it is worth noting that the ADMM exhibits good numerical performance on non-convex problems such as nonnegative matrix factorization (Sun and Févotte 2014), tensor decomposition (Liavas and Sidiropoulos 2015), matrix separation (Shen et al. 2014; Papamakarios et al. 2014), matrix completion (Xu et al. 2012), and motion segmentation, to mention but a few.…”
Section: The Generalized Q-Shrinkage Operator Utilized in Step 4
confidence: 99%
“…To the best of our knowledge, all current work in the non‐convex ADMM literature requires f or g to have a Lipschitz continuous first derivative in order to guarantee convergence. Without such assumptions, the Fejér monotonicity of the sequences generated by ADMM cannot be established and, as a consequence, the convergence of ADMM remains unknown; see Hong et al., Zhong & Kwok, and Li & Pong for technical details. For MCP or SCAD PQR, neither the loss function ρ_τ(·) nor the penalty has a Lipschitz continuous first derivative.…”
Section: The QR-ADMM Algorithm
confidence: 99%
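For reference, the Lipschitz-continuity requirement on the first derivative mentioned in the excerpt above is the standard condition on a differentiable function f with some modulus L > 0:

```latex
\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y.
```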
“…On the other hand, the convergence of ADMM for general non-convex problems still remains an open problem. Owing to the lack of convexity, the convergence of ADMM typically requires strong assumptions on the objective functions; see, for example, Hong et al. (2014), Zhong & Kwok (2014), and Li & Pong (2014).…”
Section: Convergence
confidence: 99%