2020
DOI: 10.1109/lcomm.2019.2955442
Decentralized Consensus Optimization Based on Parallel Random Walk

Abstract: The alternating direction method of multipliers (ADMM) has recently been recognized as a promising approach for large-scale machine learning models. However, very few results study ADMM from the perspective of communication cost, especially jointly with running time. In this letter, we investigate the communication efficiency and running time of ADMM in solving the consensus optimization problem over decentralized networks. We first review the effort of random walk ADMM (W-ADMM), which reduces communication costs …

Cited by 21 publications (24 citation statements)
References 19 publications
“…θ_b, ∀b, can be reformulated as t = ∑_{b∈B} A_b θ_b = 0, where A is deduced as follows: if (b, l) = E_e, ∀b, l, the (e, b)-th block and the (e, l)-th block of A are I_{N_R} and −I_{N_R}, respectively; otherwise the corresponding blocks are zero matrices 0_{N_R}. To effectively solve problem (P2), we then utilize the ADMM method in [13], [14]. The augmented Lagrangian for problem (P2) is…”
Section: Decentralized Beamforming Design
mentioning
confidence: 99%
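The quoted statement breaks off before giving the augmented Lagrangian. For the generic linearly constrained form it describes, min_θ ∑_{b∈B} f_b(θ_b) subject to ∑_{b∈B} A_b θ_b = 0, the ADMM augmented Lagrangian takes the standard textbook shape below — a sketch only, not the paper's exact expression; f_b, the multiplier λ, and the penalty ρ are assumed notation:

```latex
L_\rho\big(\{\theta_b\}, \lambda\big)
  = \sum_{b \in \mathcal{B}} f_b(\theta_b)
  + \lambda^{\top} \sum_{b \in \mathcal{B}} A_b \theta_b
  + \frac{\rho}{2} \Big\| \sum_{b \in \mathcal{B}} A_b \theta_b \Big\|_2^2
```

ADMM then alternates block minimization over the θ_b with a dual ascent step on λ, which is what makes the decentralized per-agent updates possible.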
“…Since the incremental update method for decentralized optimization is more communication-efficient than the full CSI exchange method [14], we utilize this method to solve problem (P2). Then, variables at BS b … In what follows, we focus on solving problems (7a)-(7d); the iteration index is dropped to simplify notation.…”
Section: Decentralized Beamforming Design
mentioning
confidence: 99%
“…Furthermore, an incremental learning method has also been recognized as a promising approach to reduce communication costs: it activates one agent and one link at each iteration, in a cyclic or random order, while keeping all other agents and links idle. Random-walk ADMM (WADMM) [5], parallel random walk ADMM (PW-ADMM) [6], and walk proximal gradient (WPG) [7] are commonly used incremental methods. Specifically, the sequence of the updating order for agents is randomized by following a Markov chain in WADMM, while PW-ADMM allows multiple random walks in parallel.…”
mentioning
confidence: 99%
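The activation pattern described above — one agent and one link active per iteration, with the updating order following a random walk on the network graph — can be sketched in a few lines. The ring topology, the function name, and the choice of two parallel walks are hypothetical illustration, not the actual schemes of [5], [6]:

```python
import random

def random_walk_order(neighbors, start, steps, seed=0):
    """Generate an agent-activation sequence by a random walk on the
    network graph: at each step the 'token' moves to a uniformly chosen
    neighbor, so only one agent (and one link) is active per iteration."""
    rng = random.Random(seed)
    order = [start]
    node = start
    for _ in range(steps - 1):
        node = rng.choice(neighbors[node])
        order.append(node)
    return order

# Toy 4-agent ring network (adjacency lists; hypothetical topology).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

# W-ADMM style: a single walk visits agents one at a time.
single = random_walk_order(ring, start=0, steps=8)

# PW-ADMM style: several walks run in parallel, each token starting
# at a different agent.
parallel = [random_walk_order(ring, start=s, steps=8, seed=s) for s in (0, 2)]
```

Each consecutive pair in a walk is an edge of the graph, so the communication per iteration is a single link activation rather than a network-wide exchange.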
“…In the existing literature, e.g., [11]–[18], a large number of decentralized algorithms have been investigated to solve the consensus problem (1). Typically, the algorithms can mainly be classified into primal and primal-dual types, namely, gradient descent (GD) based methods and alternating direction method of multipliers (ADMM) based methods, respectively.…”
mentioning
confidence: 99%
“…Typically, the algorithms can mainly be classified into primal and primal-dual types, namely, gradient descent (GD) based methods and alternating direction method of multipliers (ADMM) based methods, respectively. In this work, we will use ADMM as the optimizer, which can usually achieve more accurate consensus performance than GD-based methods with a constant step size [18].…”
mentioning
confidence: 99%
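The primal-dual distinction the statement draws can be made concrete on a toy global-variable-consensus problem. The following is a generic textbook consensus-ADMM sketch, not the letter's algorithm; the local quadratic costs f_i(x) = (x − a_i)²/2 are an assumed example whose consensus optimum is the average of the a_i:

```python
def consensus_admm(a, rho=1.0, iters=200):
    """Global-variable consensus ADMM for
        min_x  sum_i (x_i - a_i)^2 / 2   s.t.  x_i = z for all i,
    whose optimum is the average of the a_i."""
    n = len(a)
    z = 0.0
    u = [0.0] * n  # scaled dual variables
    for _ in range(iters):
        # x-update: closed-form minimizer of
        # (x - a_i)^2 / 2 + (rho/2) * (x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # z-update: averaging x_i + u_i enforces consensus
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual ascent on the consensus residual x_i - z
        u = [u[i] + x[i] - z for i in range(n)]
    return z

consensus = consensus_admm([1.0, 2.0, 6.0])  # converges to mean(a) = 3.0
```

Unlike GD with a constant step size, which settles in a neighborhood of the optimum, the dual variables here drive the consensus residual to zero, which is the accuracy advantage the quoted statement attributes to ADMM.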