Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/462

Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems

Abstract: The proximal gradient algorithm has been popularly used for convex optimization. Recently, it has also been extended for nonconvex problems, and the current state-of-the-art is the nonmonotone accelerated proximal gradient algorithm. However, it typically requires two exact proximal steps in each iteration, and can be inefficient when the proximal step is expensive. In this paper, we propose an efficient proximal gradient algorithm that requires only one inexact (and thus less expensive) proximal step in each iteration …
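To make the idea concrete, below is a minimal sketch of one proximal gradient iteration with a single inexact proximal step, assuming a nuclear-norm regularizer whose exact prox requires a full SVD and approximating it from a rank-k subspace found by a few power iterations. The rank k, the power-iteration count, and the randomized construction are illustrative assumptions, not the inexact-prox scheme of the paper.

```python
import numpy as np

def inexact_nuclear_prox(V, lam, k=5, power_iters=2, seed=None):
    """Approximate prox of lam*||.||_* at V from a rank-k randomized subspace."""
    rng = np.random.default_rng(seed)
    n = V.shape[1]
    Q = rng.standard_normal((n, k))
    for _ in range(power_iters):          # power iterations refine the subspace
        Q, _ = np.linalg.qr(V @ Q)        # (m, k) orthonormal basis
        Q, _ = np.linalg.qr(V.T @ Q)      # (n, k) orthonormal basis
    B = V @ Q                             # project V onto the low-rank subspace
    U, s, Wt = np.linalg.svd(B, full_matrices=False)
    s = np.maximum(s - lam, 0.0)          # soft-threshold the approximate singular values
    return U @ np.diag(s) @ (Wt @ Q.T)

def inexact_pg_step(X, grad_f, lam, eta):
    """One iteration: a gradient step on the smooth part, then one inexact proximal step."""
    return inexact_nuclear_prox(X - eta * grad_f(X), eta * lam)
```

Because the proximal subproblem is only solved approximately, each iteration is cheaper than an exact prox, at the cost of an approximation error that convergence analyses must account for.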

Cited by 52 publications (42 citation statements); References 15 publications.
Citation statements: 3 supporting, 39 mentioning, 0 contrasting; citing publications span 2019–2023. Selected citation statements, ordered by relevance:
“…Our results on the Movielens dataset are on-par with some of the recent literature [28]. Their authors use a completely different setting (a non-convex matrix factorization approach) to reach a RMSE of 0.785 on the same dataset.…”
Section: Recommendation Results (supporting)
confidence: 72%
“…The convergence studies for AAPCD, through a novel perspective, characterize the stepsize based on the momentum parameter. This fills a void in previous analyses such as (Li and Lin 2015; Yao et al. 2017), where the effect of the exact value of the momentum parameter on the acceleration of convergence was not observed. As the stability of the algorithm is strongly affected by asynchronism, we will show that allowing negative momentum for high staleness values significantly increases the reduction in the objective function and accelerates convergence.…”
Section: Introduction (supporting)
confidence: 67%
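For context on the momentum parameter discussed in the excerpt above, here is a toy sketch of an accelerated proximal gradient update in which an extrapolation coefficient beta controls the acceleration. The quadratic-plus-l1 objective, the fixed step size, and the constant beta are illustrative assumptions; the cited analyses use per-iteration schedules (and, in the asynchronous setting, possibly negative values).

```python
import numpy as np

def soft_threshold(v, tau):
    """Exact prox of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_pg(A, b, lam, eta, beta=0.9, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a fixed momentum parameter beta."""
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x + beta * (x - x_prev)      # extrapolation step controlled by beta
        g = A.T @ (A @ y - b)            # gradient of the smooth part at y
        x_prev, x = x, soft_threshold(y - eta * g, eta * lam)
    return x
```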
“…However, such results required the computation of exact proximal steps (22) in the PGD iterations, and do not apply to CPGD, which leverages the inexact proximal step (33). Convergence of PGD in nonconvex setups with inexact proximal steps was studied in [26], [39]. The results established in both papers require the proximal-step approximation errors incurred at each iteration to be decreasing and summable, which may not necessarily be the case for the MAP approximation (32).…”
Section: Local Fixed-Point Convergence of CPGD (mentioning)
confidence: 99%
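As a reference point for the "decreasing and summable" requirement in the excerpt above, that condition amounts to a per-iteration tolerance schedule such as the one sketched below; the eps0 / k**2 sequence and the approx_prox helper named in the comment are hypothetical illustrations, not quantities defined in the cited papers.

```python
def error_schedule(eps0=1e-2, num_iters=50):
    """Per-iteration proximal-step tolerances eps_k that are decreasing and summable."""
    return [eps0 / (k * k) for k in range(1, num_iters + 1)]

# Each iteration would then solve the proximal subproblem only until its
# (estimated) suboptimality drops below eps_k, e.g.
#   x_next = approx_prox(v_k, tol=eps_k)   # approx_prox is a hypothetical inner solver
```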