2016
DOI: 10.1109/tsp.2016.2537271
Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis

Abstract: Aiming at solving large-scale optimization problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the optimization problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. T…
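The consensus formulation mentioned in the abstract can be illustrated with a minimal sketch of synchronous consensus ADMM over a star topology. This is a hypothetical toy, not the paper's asynchronous algorithm: the data `a`, the quadratic losses f_i(x) = 0.5·(x − a_i)², and the penalty `rho` are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of synchronous consensus ADMM (scaled form) for
#   minimize sum_i f_i(x_i)  subject to  x_i = z  for all workers i,
# with assumed quadratic losses f_i(x) = 0.5 * (x - a_i)**2.
a = np.array([1.0, 2.0, 6.0])   # hypothetical local data held by each worker
rho = 1.0                        # ADMM penalty parameter (assumed)
x = np.zeros_like(a)             # local (worker) variables
u = np.zeros_like(a)             # scaled dual variables
z = 0.0                          # consensus variable held at the master

for _ in range(200):
    # Worker step, run in parallel over the star network:
    # x_i = argmin f_i(x) + (rho/2)(x - z + u_i)^2, closed form for quadratics.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Master step: average the workers' reports to update the consensus variable.
    z = np.mean(x + u)
    # Dual step, performed locally by each worker.
    u = u + x - z

print(z)  # converges to mean(a) = 3.0, the minimizer of sum_i f_i
```

In this synchronous form the master must wait for all workers before updating z, which is exactly the straggler bottleneck the abstract points out.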

Cited by 195 publications (166 citation statements)
References 29 publications
“…Since all agents and parallel random walks keep individual clocks, both PW-ADMM and IPW-ADMM are asynchronous algorithms. However, our proposed algorithms differ from existing work [15], [17], where only one master updates the variable z. Moreover, the updated z is sent only to the agents that are currently active.…”
Section: B Intelligent Parallel Random Walk ADMM
confidence: 94%
“…We consider the convergence of the proposed approach under the asynchronous protocol, where the master is free to make updates with gradients from only a partial set of worker machines. We begin by introducing important conditions commonly used in previous work [13], [20], [40].…”
Section: Algorithm
confidence: 99%
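The partial-update protocol described in this citation can be simulated with a small toy model. A hedged sketch, not the cited paper's method: a master takes gradient steps using whichever workers happen to report each round, reusing stale gradients from the rest; the data `a`, losses, reporting probability, and step size `lr` are all assumptions made for illustration.

```python
import numpy as np

# Toy simulation: the master updates with gradients from only a partial,
# random set of workers per round, without waiting for stragglers.
# Assumed losses f_i(z) = 0.5 * (z - a_i)**2 (illustrative only).
rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 6.0])     # hypothetical local data per worker
z = 0.0                            # master variable
lr = 0.3                           # step size (assumed)
grads = np.zeros_like(a)           # last gradient reported by each worker

for _ in range(500):
    active = rng.random(a.size) < 0.5   # each worker reports w.p. 0.5
    grads[active] = z - a[active]        # fresh gradients from active workers
    # Master proceeds immediately, mixing fresh and stale gradients.
    z = z - lr * grads.mean()

print(z)  # approaches mean(a) = 3.0 despite partial participation
```

Despite the staleness, the iterate still settles at the minimizer of the aggregate loss, which is the intuition behind the partial-set update rule discussed above.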
“…Therefore, the speed of distributed ADMM is limited by the slowest worker nodes, especially when the nodes have different computation and communication delays. This creates a bottleneck: the conventional distributed ADMM suffers from slow convergence and high time cost in practice [2], [23]. After that, local variables in the same group are used to generate the group variable w_{G_j} in (iii).…”
Section: B Distributed ADMM Framework For Linear Classification
confidence: 99%
“…Also, according to the communication protocol, distributed ADMM can be divided into two classes: 1) asynchronous, where some nodes of the network are allowed to wake up at random and perform local updates in an uncoordinated fashion [23], [24]; and 2) synchronous, where the master node is triggered only once it has received information from all slave nodes in each iteration [25]. In this paper, we focus only on synchronous distributed ADMM under the master-slave mode.…”
confidence: 99%