2017
DOI: 10.1007/s10589-017-9954-1

A proximal difference-of-convex algorithm with extrapolation

Abstract: We consider a class of difference-of-convex (DC) optimization problems whose objective is level-bounded and is the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function and a continuous concave function. While problems of this kind can be solved by the classical difference-of-convex algorithm (DCA) [26], the difficulty of the subproblems of this algorithm depends heavily on the choice of DC decomposition. Simpler subproblems can be obtained by using a specific DC decomposition…
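To make the iteration described in the abstract concrete, below is a minimal Python sketch of a pDCAe-style update applied to an l1-l2 regularized least-squares model. The choice of model, the FISTA-style extrapolation schedule, and all function and parameter names here are illustrative assumptions, not the paper's exact setup or code.

    import numpy as np

    def pdca_e(A, b, lam, max_iter=500, tol=1e-8):
        """Hypothetical sketch of a proximal DC algorithm with extrapolation,
        applied to  min 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_2,
        i.e. f(x)  = 0.5*||Ax - b||^2   (smooth, Lipschitz gradient),
             P1(x) = lam*||x||_1        (proper closed convex, prox = soft-thresholding),
             P2(x) = lam*||x||_2        (continuous convex, so -P2 is concave)."""
        n = A.shape[1]
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
        x_prev = x = np.zeros(n)
        theta_prev = theta = 1.0               # FISTA-style extrapolation parameters
        for _ in range(max_iter):
            beta = (theta_prev - 1.0) / theta  # extrapolation weight in [0, 1)
            y = x + beta * (x - x_prev)        # extrapolated point
            # one subgradient of P2(x) = lam*||x||_2 at the current iterate
            nx = np.linalg.norm(x)
            xi = lam * x / nx if nx > 0 else np.zeros(n)
            # proximal gradient step on f + P1, shifted by the subgradient of P2
            g = A.T @ (A @ y - b) - xi
            z = y - g / L
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of (lam/L)*||.||_1
            done = np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x))
            x_prev, x = x, x_new
            if done:
                break
            theta_prev, theta = theta, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        return x

With random A and b this sketch typically converges to a sparse stationary point; the extrapolation weights mimic FISTA, which is one of the schedules considered for pDCAe, though the paper also discusses restarting and other safeguards not shown here.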

Cited by 142 publications (162 citation statements)
References 39 publications
“…By assuming this new potential function is a KL function and F is level-bounded, we show that the whole sequence generated by pDCAe is convergent. We then study a relationship between the KL assumption used in this paper and the one used in [38]. Specifically, under a suitable smoothness assumption on P2, we show that if the potential function used in [38] has a KL exponent of 1/2, so does our new potential function.…”
Section: Introduction (mentioning)
confidence: 91%
“…More importantly, as we shall see later in Section 5, the objectives of models for simultaneous sparse recovery and outlier detection can be written as DC functions whose concave parts are typically nonsmooth. Thus, for these problems, the analysis in [38] cannot be applied to studying global sequential convergence nor local convergence rate of the sequence generated by pDCAe.…”
Section: Introduction (mentioning)
confidence: 99%
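As one standard illustration of a DC objective with a nonsmooth concave part (not necessarily the model considered in [38] or in the citing work), the l1-l2 regularized least-squares problem can be written as

    F(x) = \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1 - \lambda\|x\|_2,

where f(x) = \tfrac{1}{2}\|Ax-b\|_2^2 is smooth convex with Lipschitz gradient, P_1(x) = \lambda\|x\|_1 is proper closed convex, and P_2(x) = \lambda\|x\|_2 is continuous and convex but nonsmooth at the origin, so the concave part -P_2 is nonsmooth there.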
“…To address these nonconvex regularization problems, many iterative algorithms have been investigated, such as the DCA [45][46][47][48] (also known as the Convex-ConCave Procedure (CCCP) [49] or Multi-Stage (MS) convex relaxation [22]) and its accelerated versions, the Boosted Difference of Convex function Algorithm (BDCA) [50] and the proximal Difference-of-Convex Algorithm with extrapolation (pDCAe) [51]; the alternating direction method of multipliers (ADMM) [52]; split Bregman iteration (SBI) [53]; General Iterative Shrinkage and Thresholding (GIST) [54]; and the nonmonotone accelerated proximal gradient method (nmAPG) [55], an extension of the APG [56].…”
Section: Background (mentioning)
confidence: 99%