2022
DOI: 10.48550/arxiv.2202.02580
Preprint

Communication Efficient Federated Learning via Ordered ADMM in a Fully Decentralized Setting

Abstract: The challenge of communication-efficient distributed optimization has attracted attention in recent years. In this paper, a communication-efficient algorithm, called ordering-based alternating direction method of multipliers (OADMM), is devised in a general fully decentralized network setting where a worker can only exchange messages with neighbors. Compared to the classical ADMM, a key feature of OADMM is that transmissions are ordered among workers at each iteration such that a worker with the most informative…
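The abstract is truncated, but the ordering mechanism it describes can be illustrated with a minimal sketch. Assume, purely for illustration, that a worker's "informativeness" is scored by the norm of its parameter change since its last transmission, and that only the top-k scorers transmit in a given round; neither choice is confirmed by the truncated abstract.

```python
import numpy as np

# Minimal sketch of ordered transmissions in a decentralized round.
# ASSUMPTIONS (not confirmed by the truncated abstract): informativeness
# is the norm of a worker's parameter change since its last transmission,
# and only the k highest-scoring workers transmit this round.

def ordered_transmission_schedule(current, last_sent, k):
    """Return indices of the k workers with the largest update norms,
    most informative first."""
    scores = np.array([np.linalg.norm(c - s)
                       for c, s in zip(current, last_sent)])
    order = np.argsort(scores)[::-1]  # descending informativeness score
    return order[:k]                  # censor the least informative workers

# Usage: 5 workers with 3-dimensional local models, 2 transmission slots.
rng = np.random.default_rng(0)
current = [rng.normal(size=3) for _ in range(5)]
last_sent = [np.zeros(3) for _ in range(5)]
print(ordered_transmission_schedule(current, last_sent, k=2))
```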

Cited by 1 publication (2 citation statements) | References 14 publications (20 reference statements)
“…The basic idea of distributed machine learning (DML) is to parallelize computation across multiple local devices (also known as workers or nodes) to solve the following distributed optimization problem:

$$\min_{\{\theta_i\}} \; \sum_{i=1}^{N} L_i(\theta_i) \quad \text{s.t.} \quad \theta_1 = \theta_2 = \cdots = \theta_N, \tag{1}$$

where $\theta_i$ denotes the model parameter vector and $L_i(\theta_i)$ is the local objective function of worker $i$ ($i \in \{1, 2, \cdots, N\}$). Distributed optimization algorithms are currently among the most popular research directions, with a specific focus on approaches that optimize a performance criterion using data stored at local devices [1]. The ADMM combines the decomposability of the dual ascent method with the good convergence of the method of multipliers.…”
Section: Introduction (mentioning)
confidence: 99%
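The quoted problem (1) decomposes across workers under ADMM. As a concrete illustration, here is a minimal consensus-ADMM sketch on a toy instance; the quadratic local losses L_i(θ) = ‖A_i θ − b_i‖², the penalty ρ, and the iteration count are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Toy consensus ADMM for problem (1) with quadratic local losses
# L_i(theta) = ||A_i @ theta - b_i||^2. Data, rho, and iteration count
# are illustrative assumptions, not taken from the cited paper.
rng = np.random.default_rng(1)
N, d, m, rho = 4, 3, 10, 1.0
A = [rng.normal(size=(m, d)) for _ in range(N)]
b = [rng.normal(size=m) for _ in range(N)]

x = [np.zeros(d) for _ in range(N)]  # local models theta_i
u = [np.zeros(d) for _ in range(N)]  # scaled dual variables
z = np.zeros(d)                      # consensus variable

for _ in range(50):
    # Local (decomposable) updates, one per worker: solve the
    # rho-regularized least-squares subproblem in closed form.
    x = [np.linalg.solve(2 * A[i].T @ A[i] + rho * np.eye(d),
                         2 * A[i].T @ b[i] + rho * (z - u[i]))
         for i in range(N)]
    # Consensus averaging and dual (multiplier) updates.
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    u = [u[i] + x[i] - z for i in range(N)]

print("consensus model:", z)
```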
“…The ADMM combines the decomposability of the dual ascent method with the good convergence of the method of multipliers. It can be used to solve problem (1) and has a wide range of applications. Wang et al. [2] propose an ADMM-based DML architecture that preserves privacy.…”
Section: Introduction (mentioning)
confidence: 99%
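To make the "decomposability plus multiplier-method convergence" remark concrete, the standard scaled-form ADMM updates for the consensus formulation of problem (1) are sketched below. This is the textbook iteration, not the ordered (OADMM) variant devised in the paper above.

```latex
% Standard scaled-form consensus-ADMM updates for problem (1);
% textbook iteration, not the ordered (OADMM) variant of the paper.
\begin{align*}
\theta_i^{k+1} &= \operatorname*{arg\,min}_{\theta_i}\;
    L_i(\theta_i) + \frac{\rho}{2}\bigl\lVert \theta_i - z^{k} + u_i^{k} \bigr\rVert_2^2
    &&\text{(local, decomposable step)}\\
z^{k+1} &= \frac{1}{N}\sum_{i=1}^{N}\bigl(\theta_i^{k+1} + u_i^{k}\bigr)
    &&\text{(consensus averaging)}\\
u_i^{k+1} &= u_i^{k} + \theta_i^{k+1} - z^{k+1}
    &&\text{(dual / multiplier ascent step)}
\end{align*}
```

The first step runs independently at each worker (the dual-ascent-style decomposition), while the augmented penalty term ρ/2‖·‖² supplies the convergence behavior of the method of multipliers.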