2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6638534

Linearly convergent decentralized consensus optimization with the alternating direction method of multipliers

Abstract: In the decentralized consensus optimization problem, a network of agents minimizes the summation of their local objective functions on a common set of variables, allowing only information exchange among neighbors. The alternating direction method of multipliers (ADMM) has been shown to be a powerful tool for solving the problem with empirically fast convergence. This paper establishes the linear convergence rate of the ADMM in decentralized consensus optimization. The theoretical convergence rate is a funct…
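To make the setup concrete, below is a minimal sketch of a standard decentralized consensus ADMM recursion, assuming simple quadratic local objectives f_i(x) = ½(x − a_i)² and a ring network. The update form, step-size c, and the `decentralized_admm` helper are illustrative assumptions for this example, not necessarily the exact iteration analyzed in the paper.

```python
import numpy as np

def decentralized_admm(a, neighbors, c=0.5, iters=1000):
    """Decentralized consensus ADMM for local objectives f_i(x) = 0.5*(x - a_i)^2.

    Each agent i keeps a primal variable x_i and a dual variable alpha_i,
    and exchanges x only with its graph neighbors (no central coordinator).
    """
    n = len(a)
    x = np.zeros(n)
    alpha = np.zeros(n)
    deg = np.array([len(neighbors[i]) for i in range(n)], dtype=float)
    for _ in range(iters):
        # x-update: solve each agent's local first-order optimality condition
        # (x - a_i) + alpha_i + 2*c*|N_i|*x - c*sum_{j in N_i}(x_i^k + x_j^k) = 0
        s = np.array([sum(x[i] + x[j] for j in neighbors[i]) for i in range(n)])
        x_new = (a - alpha + c * s) / (1.0 + 2.0 * c * deg)
        # dual update: accumulate the disagreement with neighbors
        d = np.array([sum(x_new[i] - x_new[j] for j in neighbors[i])
                      for i in range(n)])
        alpha = alpha + c * d
        x = x_new
    return x

# 4-agent ring network; the consensus minimizer is the average of the a_i
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
a = np.array([1.0, 2.0, 3.0, 4.0])
x = decentralized_admm(a, ring)
```

With strongly convex local objectives such as these quadratics, all agents' iterates x_i contract toward the global minimizer (here, the average of the a_i), which is the regime in which the paper establishes a linear convergence rate.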

Cited by 14 publications (19 citation statements); references 14 publications. Citing publications span 2014–2022.
“…Besides the ADMM, existing decentralized approaches for solving (1) include belief propagation [7], incremental optimization [22], subgradient descent [15]- [17], dual averaging [18], [19], etc. Belief propagation and incremental optimization require one to predefine a tree or loop structure in the network, whereas the advantage of the ADMM, subgradient descent, and dual averaging is that they do not rely on any predefined structures.…”
Section: B. Related Work
confidence: 99%
“…The next proposition addresses another extreme scenario when the ADMM iterates are operating on the active set of the quadratic program (18).…”
Section: Special Cases of Quadratic Programming
confidence: 99%
“…A few recent papers have focused on the optimal parameter selection of the ADMM algorithm for some variations of distributed convex programming subject to linear equality constraints, e.g. [17], [18]. The paper is organized as follows. In Section II, we derive some preliminary results on fixed-point iterations and review the necessary background on the ADMM method.…”
confidence: 99%
“…Finally, references [10], [11] analyze the distributed ADMM method therein when the costs are strongly convex and have Lipschitz continuous gradients. The method in [10], [11] corresponds to our deterministic Jacobi variant when τ = 1. With respect to our results, the bounds in [10], [11] are tighter than ours for the method they study.…”
Section: Related Work
confidence: 99%