2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2018.8636084
A Proximal Zeroth-Order Algorithm for Nonconvex Nonsmooth Problems

Abstract: In this paper, we focus on solving an important class of nonconvex optimization problems, which includes many problems such as signal processing over a networked multi-agent system and distributed learning over networks. Motivated by applications in which the local objective function is the sum of a smooth but possibly nonconvex part and a non-smooth but convex part, subject to a linear equality constraint, this paper proposes a proximal zeroth-order primal-dual algorithm (PZO-PDA) that accounts for the in…
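The abstract is truncated, so the details of PZO-PDA are not available here. As a hedged illustration of the two generic ingredients the title and abstract name — a zeroth-order (derivative-free) gradient estimate for the smooth nonconvex part, combined with a proximal step for the nonsmooth convex part — here is a minimal sketch. The step size, smoothing radius, and the choice of an ℓ1 regularizer are assumptions made for illustration only; this is not the paper's primal-dual algorithm and ignores the linear equality constraint.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-6, rng=None):
    """Two-point zeroth-order gradient estimate along a random Gaussian direction."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(x.shape)
    # (f(x + mu*u) - f(x - mu*u)) / (2*mu) approximates the directional derivative u . grad f(x)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def zo_proximal_step(f, x, step=0.1, lam=0.01, mu=1e-6, rng=None):
    """One proximal zeroth-order update: ZO gradient step on smooth f, prox on lam*||.||_1."""
    g = zo_gradient(f, x, mu, rng)
    return prox_l1(x - step * g, step * lam)

# Toy smooth objective f(x) = ||x - 1||^2 (the nonsmooth part is lam*||x||_1).
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(3)
rng = np.random.default_rng(1)
for _ in range(500):
    x = zo_proximal_step(f, x, rng=rng)
# x should drift toward the minimizer of ||x - 1||^2 + 0.01*||x||_1, i.e. near 1.
```

The key design point is that only function evaluations of `f` are used — no analytic gradient — which is what "zeroth-order" refers to, while the nonsmooth term is handled exactly through its proximal map.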

Cited by 3 publications (7 citation statements)
References 24 publications
“…Let us postpone the proof of Proposition 1 and first discuss how Proposition 1 indicates the convergence rate of each algorithm. Inequality (25) asserts that the error norm ‖e^so_{i,k}‖ is bounded by a linear and a quadratic term of the iterate difference norm ‖x_i(k+1) − x_i(k)‖, so the approximation error vanishes as the sequence {x_i(k)} converges. Notice that x_i(k) converges by Algorithm 1, so after a number of iterations the term ‖x_i(k+1) − x_i(k)‖ → 0, and (l/2)‖x_i(k+1) − x_i(k)‖ becomes smaller than 2M; this implies that the error norm ‖e^so_{i,k}‖ eventually becomes proportional to the quadratic term…”
Section: Convergence Rates Comparison
confidence: 99%
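One plausible reading of the quoted bound is ‖e‖ ≤ min(2M·d, (l/2)·d²) with d = ‖x_i(k+1) − x_i(k)‖: once (l/2)·d drops below 2M, i.e. d < 4M/l, the quadratic term becomes the active (smaller) bound, which matches the quoted conclusion. The exact form of inequality (25) is not reproduced here, so the constants M and l below are illustrative assumptions, not values from the cited paper.

```python
# Illustrative check: which term of the assumed bound min(2*M*d, (l/2)*d**2)
# is active as the iterate difference d shrinks. Switch-over at d = 4*M/l.
M, l = 1.0, 2.0  # hypothetical constants for illustration
for d in [10.0, 4.0, 1.0, 0.1]:
    linear, quadratic = 2 * M * d, 0.5 * l * d**2
    active = "quadratic" if quadratic < linear else "linear"
    print(f"d={d}: linear={linear}, quadratic={quadratic}, active={active}")
```

For large d the linear term 2M·d is the tighter bound; for small d the quadratic term (l/2)·d² dominates the minimum, so the error eventually tracks the quadratic term as the iterates converge.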
“…The operator splitting method was soon extended to deal with aggregative games over a time-varying communication graph [23]. Among other works, a dynamic NE-seeking strategy for a multiagent networked game with disturbance rejection is considered by Romano and Pavel [7], and a distributed computation algorithm for ϵ-NE is investigated by Parise et al. [24]. Notice that there are some works on nonconvex problems [25], [26], but we only focus on convex games in this article.…”
Section: Introduction
confidence: 99%
“…Various ZO optimization methods have been proposed, e.g., ZO (stochastic) gradient descent algorithms [20]- [31], ZO stochastic coordinate descent algorithms [32], ZO (stochastic) variance reduction algorithms [24], [25], [29], [30], [33]- [45], ZO (stochastic) proximal algorithms [33], [41], [46], [47], ZO Frank-Wolfe algorithms [24], [43], [45], [48], ZO mirror descent algorithms [18], [39], [49], ZO adaptive momentum methods [47], [50], ZO methods of multipliers [34], [35], [51], [52], ZO stochastic path-integrated differential estimator [37], [42], [52]. Convergence properties of these algorithms have been analyzed in detail.…”
Section: A Literature Review
confidence: 99%
“…Assumptions 3 and 4 are standard in stochastic optimization with ZO information feedback, e.g., [22], [24], [32]- [35], [39], [40], [47], [48], [60]. Assumption 5 is slightly weaker than the assumption that each ∇f i is bounded, which is normally used in the literature studying finite-sum ZO optimization, e.g., [18], [29], [30], [34]- [36], [41], [48], [51]- [53], [57], [58],…”
Section: Assumption
confidence: 99%