2019
DOI: 10.1007/s10589-019-00073-1
General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems

Cited by 42 publications (27 citation statements) | References 41 publications
“…In this paper, we study how to construct a general merit function D^α_γ(y, z, x) for the step-size α and proximal parameter γ. Note that compared with the forward-backward splitting method [28,29], the DR and PR splitting methods include two implicit proximal steps, which make the DR and PR methods usually more effective when the proximal mappings are easy to evaluate [18].…”
Section: Model and Motivation
confidence: 99%
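To make the comparison concrete, here is a minimal Python sketch of the Douglas-Rachford iteration, whose two implicit proximal steps contrast with the single gradient-plus-prox structure of forward-backward splitting. This is an illustration under assumed model choices, not the cited authors' algorithm: the quadratic f and the ℓ1-norm g are placeholders chosen because both have closed-form proximal mappings.

```python
import numpy as np

# Minimal sketch (not the paper's code): Douglas-Rachford splitting for
# min_x f(x) + g(x), assuming both proximal mappings are easy to evaluate.
# Illustrative choices: f(x) = 0.5*||x - b||^2 and g(x) = mu*||x||_1.

def prox_f(v, gamma, b):
    # prox of gamma*f for f(x) = 0.5*||x - b||^2 has a closed form
    return (v + gamma * b) / (1.0 + gamma)

def prox_g(v, gamma, mu):
    # prox of gamma*g for g = mu*||.||_1 is soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - gamma * mu, 0.0)

def douglas_rachford(b, mu, gamma=1.0, alpha=1.0, iters=200):
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_f(z, gamma, b)              # first implicit (proximal) step
        y = prox_g(2.0 * x - z, gamma, mu)   # second implicit step on the reflection
        z = z + alpha * (y - x)              # relaxed update of the governing sequence
    return x
```

With alpha = 2 the relaxed update corresponds to the Peaceman-Rachford iteration, which typically needs stronger assumptions to converge.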
See 1 more Smart Citation
“…In this paper, we study how to construct a general merit function D α γ (y, z, x) for step-size α and proximal parameter γ . Note that compared with the forward-backward splitting method [28,29], the DR and PR splitting methods include two implicit proximal steps, which make the DR and PR methods usually more effective when the proximal mappings are easy to evaluate [18].…”
Section: Model and Motivationmentioning
confidence: 99%
“…The assumption of KŁ functions is not restrictive, since large classes of functions, for example the semi-algebraic functions, satisfy this property; see, e.g., [28,35,36]. Indeed, the KŁ property has been widely and successfully used to analyze the convergence and convergence rates of many approaches, such as proximal-based methods [28,29,37] and alternating minimization methods [35,36,38,39].…”
Section: Model and Motivation
confidence: 99%
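For reference, the KŁ property mentioned above can be stated as follows; this is a standard formulation paraphrased from the literature, not a verbatim quote of any cited paper's definition:

```latex
% Kurdyka-Lojasiewicz property at \bar{x} \in \operatorname{dom} \partial f,
% for a proper lower semicontinuous f (standard formulation, paraphrased).
\text{There exist } \eta \in (0,\infty], \text{ a neighborhood } U \text{ of } \bar{x},
\text{ and a concave } \varphi \in C^{1}\bigl((0,\eta)\bigr)
\text{ with } \varphi(0)=0,\ \varphi' > 0,
\text{ such that for all } x \in U
\text{ with } f(\bar{x}) < f(x) < f(\bar{x}) + \eta:
\qquad \varphi'\bigl(f(x)-f(\bar{x})\bigr)\,
\operatorname{dist}\bigl(0,\partial f(x)\bigr) \;\ge\; 1 .
```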
“…Therefore, they are more flexible than methods with one inertial extrapolation, while not significantly increasing the computational cost per iteration. The numerical examples in Wu and Li (2019) illustrate the computational advantage of the methods with two inertial extrapolations.…”
Section: Introduction
confidence: 99%
“…It is noteworthy that all these inertial-type projection methods share a common characteristic: they use only one inertial extrapolation in each iteration. Very recently, iterative algorithms with two inertial extrapolations were proposed, such as the general inertial proximal gradient method (Wu and Li 2019) and the general inertial Krasnosel'skiǐ-Mann iteration (Dong et al 2018a). These algorithms reduce to the methods with one inertial extrapolation when the inertial parameters of one extrapolation are set to zero.…”
Section: Introduction
confidence: 99%
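A minimal sketch of a proximal gradient step with two separate inertial extrapolations of the kind described above, for the composite model min f(x) + g(x) with f smooth. The parameter names alpha and beta and the ℓ1 choice of g are illustrative assumptions, not the exact scheme or parameter rules of Wu and Li (2019):

```python
import numpy as np

# Sketch under assumptions (not Wu and Li's exact scheme): a proximal
# gradient step with two inertial extrapolations for min_x f(x) + g(x),
# where f is smooth with L-Lipschitz gradient and g = mu*||.||_1.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def general_inertial_prox_grad(grad_f, L, mu, x0, alpha=0.3, beta=0.3, iters=300):
    lam = 1.0 / L                  # step size tied to the Lipschitz constant
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        d = x - x_prev             # momentum direction
        y = x + alpha * d          # extrapolation fed into the proximal point
        z = x + beta * d           # extrapolation fed into the gradient
        x_prev, x = x, soft_threshold(y - lam * grad_f(z), lam * mu)
    return x

# Setting beta = 0 (or alpha = 0) switches off one of the two extrapolations,
# recovering a single-inertial variant, as noted in the text.
```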
“…In [8], it has been shown that FISTA possesses an O(1/k^2) convergence rate in the convex case, which is faster than the original proximal gradient algorithm, where k counts the iteration number. For the nonconvex case, there are also some works considering the proximal gradient method with or without acceleration, see, e.g., [9][10][11][12][13][14][15]. In [13], Wen et al proved the linear convergence of the proximal gradient algorithm with extrapolation for the nonconvex optimization problem (1), based on the error bound condition, while the works [9-12, 14, 15] studied the proximal gradient algorithm or its variants under the Kurdyka-Łojasiewicz (KL) framework for the nonconvex case, in which they usually require certain potential functions to satisfy the KL property (see Definition 2.1).…”
Section: Introduction
confidence: 99%
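For reference, a minimal sketch of the standard FISTA extrapolation schedule that yields the O(1/k^2) objective rate in the convex case; this is the textbook form, not code from any of the cited works, and it reuses the illustrative ℓ1 model from the previous sketch:

```python
import numpy as np

# Minimal FISTA sketch (standard textbook form): the t_k recursion drives
# the extrapolation weight that gives the O(1/k^2) objective rate for
# convex f + g; here g = mu*||.||_1, and grad_f is assumed L-Lipschitz.

def fista(grad_f, L, mu, x0, iters=300):
    lam = 1.0 / L
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        v = y - lam * grad_f(y)                             # gradient step at y
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam * mu, 0.0)  # prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))    # momentum-parameter update
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)       # extrapolation
        x, t = x_new, t_new
    return x
```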