2020 28th European Signal Processing Conference (EUSIPCO), 2021
DOI: 10.23919/eusipco47968.2020.9287441

Distributed Learning with Non-Smooth Objective Functions

Abstract: We develop a new distributed algorithm to solve a learning problem with non-smooth objective functions when data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian in the primal domain within the alternating direction method of multipliers (ADMM), yielding the proposed algorithm, named distributed zeroth-order based ADMM (D-ZOA). Unlike most existing algorithms for non-smooth optimization, which rely on calculating subgradients or proximal…
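To make the abstract's approach concrete, the following is a minimal sketch, not the authors' exact D-ZOA updates: it shows how a two-point zeroth-order gradient estimate can stand in for subgradient or proximal computations when minimizing each agent's augmented-Lagrangian term inside consensus ADMM. The function names (zo_grad, zo_admm), the iteration counts, the step-sizes, and the simple averaging consensus step are all illustrative assumptions.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order estimate of a (sub)gradient of f at x.

    Uses only two function evaluations along a random unit direction,
    so it applies even when f is non-smooth and has no usable gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)                      # direction on the unit sphere
    return x.size * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_admm(local_losses, dim, rho=1.0, outer=100, inner=20, step=0.05, seed=0):
    """Consensus ADMM where each primal subproblem is approximately
    minimized with zeroth-order gradient steps (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n = len(local_losses)
    x = np.zeros((n, dim))                      # local primal variables
    z = np.zeros(dim)                           # consensus variable
    lam = np.zeros((n, dim))                    # scaled dual variables
    for _ in range(outer):
        for i, f_i in enumerate(local_losses):
            # agent i's slice of the augmented Lagrangian
            aug = lambda v, f_i=f_i, i=i: f_i(v) + 0.5 * rho * np.sum((v - z + lam[i]) ** 2)
            for _ in range(inner):              # zeroth-order primal minimization
                x[i] -= step * zo_grad(aug, x[i], rng=rng)
        z = np.mean(x + lam, axis=0)            # consensus (z) update
        lam += x - z                            # dual ascent step
    return z
```

In this toy form every agent averages with all others in the z-update; the actual algorithm operates over a multi-agent network topology, which this sketch does not model.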


Cited by 3 publications (3 citation statements); References 24 publications
“…where α_0 is an appropriate initial step-size and R is an upper bound on the distance between a minimizer w* to (7) and the first iterate w^(1) as per [25].…”
Section: Zeroth-order Methods (mentioning)
confidence: 99%
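The excerpt cuts off before stating the step-size rule itself. Purely as an illustration of how an initial step α_0 and the distance bound R typically enter diminishing step-size schedules in zeroth-order methods, one generic form is shown below; this specific expression is an assumption and is not taken from [25].

```latex
% Illustrative only: a generic diminishing step-size combining the initial
% step \alpha_0 with the distance bound R; the exact rule in [25] may differ.
\alpha_k = \frac{\alpha_0 \, R}{\sqrt{k}}, \qquad k = 1, 2, \ldots
```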
“…Since the objective function in (4) is assumed to be nonsmooth, the corresponding minimization problem cannot be solved using any first-order method. To overcome this, we use a zeroth-order method as in [1]. We utilize the two-point stochastic-gradient algorithm that has been proposed in [29] for optimizing general non-smooth functions.…”
Section: B. Zeroth-Order-Based Distributed ADMM Algorithm (mentioning)
confidence: 99%
“…We utilize the two-point stochastic-gradient algorithm that has been proposed in [29] for optimizing general non-smooth functions. More specifically, we use the stochastic mirror descent method with the proximal function ½∥·∥² and the gradient estimator at point β_k given by…”
Section: B. Zeroth-Order-Based Distributed ADMM Algorithm (mentioning)
confidence: 99%
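Since the excerpt truncates before the estimator's expression, the sketch below shows one standard symmetric two-point form of such a gradient estimator and uses it inside mirror descent with the proximal function ½∥·∥², which reduces to plain stochastic gradient steps. The sampling distribution, the smoothing radius mu, and the 1/√k step decay are assumptions; the exact estimator and constants in [29] may differ.

```python
import numpy as np

def two_point_estimate(f, beta, mu=1e-4, rng=None):
    """Symmetric two-point zeroth-order gradient estimate of f at beta."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(beta.size)
    u /= np.linalg.norm(u)                       # random unit direction
    return beta.size * (f(beta + mu * u) - f(beta - mu * u)) / (2.0 * mu) * u

def zo_mirror_descent(f, beta0, n_iter=500, step0=0.1, mu=1e-4, seed=0):
    """Stochastic mirror descent with proximal function (1/2)||.||^2:
    each step subtracts the two-point gradient estimate directly."""
    rng = np.random.default_rng(seed)
    beta = beta0.astype(float).copy()
    for k in range(1, n_iter + 1):
        g = two_point_estimate(f, beta, mu=mu, rng=rng)
        beta -= step0 / np.sqrt(k) * g           # assumed 1/sqrt(k) decay
    return beta

if __name__ == "__main__":
    # Toy non-smooth objective: least absolute deviations ||A x - b||_1
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 5))
    b = A @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    print(zo_mirror_descent(lambda x: np.abs(A @ x - b).sum(), np.zeros(5)))
```

Only two evaluations of the objective are needed per iteration, which is what makes this class of methods applicable when neither gradients nor subgradients are available in closed form.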