2021
DOI: 10.48550/arxiv.2106.02958
Preprint

Zeroth-Order Algorithms for Stochastic Distributed Nonconvex Optimization

Abstract: In this paper, we consider a stochastic distributed nonconvex optimization problem with the cost function being distributed over n agents having access only to zeroth-order (ZO) information of the cost. This problem has various machine learning applications. As a solution, we propose two distributed ZO algorithms, in which at each iteration each agent samples the local stochastic ZO oracle at two points with an adaptive smoothing parameter. We show that the proposed algorithms achieve the linear speedup conver…
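The abstract describes agents that query a zeroth-order oracle at two points per iteration with an adaptive smoothing parameter. The paper's exact estimator and smoothing schedule are not reproduced above, but the standard two-point ZO gradient estimate it builds on can be sketched as follows (the function name and the fixed smoothing parameter `delta` are illustrative assumptions, not the paper's notation):

```python
import random

def two_point_zo_grad(f, x, delta):
    """Two-point zeroth-order gradient estimate (sketch).

    Samples a random unit direction u and returns
    d * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u,
    an estimate of the gradient of f at x using only function values.
    """
    d = len(x)
    # Draw a direction uniformly on the unit sphere via normalized Gaussians.
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = sum(ui * ui for ui in u) ** 0.5
    u = [ui / norm for ui in u]
    # Query the oracle at the two perturbed points.
    fp = f([xi + delta * ui for xi, ui in zip(x, u)])
    fm = f([xi - delta * ui for xi, ui in zip(x, u)])
    # Finite-difference quotient along u, scaled by the dimension d.
    scale = d * (fp - fm) / (2.0 * delta)
    return [scale * ui for ui in u]
```

Averaged over many samples, this estimate approximates the true gradient; for example, on f(x) = sum(x_i^2) at x = (1, 2) the mean estimate approaches (2, 4). In the distributed setting of the paper, each agent would form such an estimate from its local stochastic oracle, with `delta` shrinking adaptively over iterations.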

Cited by 0 publications. References 39 publications.