52nd IEEE Conference on Decision and Control 2013
DOI: 10.1109/cdc.2013.6760448
Asynchronous distributed optimization using a randomized alternating direction method of multipliers

Abstract: Consider a set of networked agents endowed with private cost functions and seeking to find a consensus on the minimizer of the aggregate cost. A new class of random asynchronous distributed optimization methods is introduced. The methods generalize the standard Alternating Direction Method of Multipliers (ADMM) to an asynchronous setting where isolated components of the network are activated in an uncoordinated fashion. The algorithms rely on the introduction of randomized Gauss-Seidel iterations of a Douglas-…
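To make the setting in the abstract concrete, the following is a minimal sketch of asynchronous consensus optimization with random edge activation. It uses a generic gossip-style scheme with quadratic private costs, not the paper's randomized ADMM; all names and constants are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the setting described in the abstract: N agents hold
# private costs f_i(x) = (x - a_i)^2 / 2 and must agree on the minimizer of
# the aggregate cost (the mean of the a_i). At each tick a randomly chosen
# edge "wakes up" and its two endpoints average their states and take local
# gradient steps. This is a generic asynchronous gossip-style scheme, NOT
# the paper's randomized ADMM; all names here are illustrative.

rng = np.random.default_rng(1)
a = np.array([1.0, 3.0, 5.0, 7.0])        # private data; aggregate minimizer is 4
x = a.copy()                               # each agent starts at its own optimum
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # ring network
step = 0.05

for _ in range(5000):
    i, j = edges[rng.integers(len(edges))]  # uncoordinated, isolated activation
    avg = 0.5 * (x[i] + x[j])               # local consensus (averaging) step
    x[i] = avg - step * (avg - a[i])        # local gradient step on f_i
    x[j] = avg - step * (avg - a[j])        # local gradient step on f_j

print(np.round(x, 2))                       # all agents near the consensus value 4
```

Only the two endpoints of the activated edge compute anything at each tick, which mirrors the uncoordinated activation of isolated network components described in the abstract.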

Cited by 151 publications (168 citation statements)
References 22 publications (42 reference statements)
“…However, in the approach of [4], [5], [6] the convergence conditions are identical to those of the brute-force method, i.e., the method without coordinate descent. These conditions involve the global Lipschitz constant of the gradient of the differentiable term rather than its coordinate-wise Lipschitz constants.…”
Section: Introduction
confidence: 98%
“…The key idea behind the convergence proof of [4] is to establish the so-called stochastic Fejér monotonicity of the sequence of iterates, as noted by [5]. In a more general setting than [4], Combettes et al. in [5] and Bianchi et al. in [6] extend the proof to the so-called α-averaged operators, which include firmly nonexpansive (FNE) operators as a special case. This generalization makes it possible to apply the coordinate descent principle to a broader class of primal-dual algorithms that is no longer restricted to the ADMM or the Douglas-Rachford algorithm.…”
Section: Introduction
confidence: 99%
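The randomized coordinate-descent principle on averaged operators mentioned in the citation above can be illustrated on a toy fixed-point problem. The sketch below is an assumption-laden illustration, not an implementation from [4]–[6]: it applies single-coordinate updates of the ℓ1 proximal operator (soft-thresholding), a firmly nonexpansive map whose unique fixed point is the origin.

```python
import numpy as np

# Hedged toy illustration of randomized coordinate descent on an averaged
# operator: T = prox of gamma * ||x||_1 (soft-thresholding), a firmly
# nonexpansive map whose unique fixed point is x = 0. At each iteration one
# coordinate is activated uniformly at random and updated through T.
# Names and constants are illustrative, not taken from the cited papers.

def soft_threshold(v, gamma):
    return np.sign(v) * max(abs(v) - gamma, 0.0)

rng = np.random.default_rng(0)
n, gamma = 8, 0.1
x = rng.normal(size=n)

for _ in range(2000):
    i = rng.integers(n)                 # activate one coordinate at random
    x[i] = soft_threshold(x[i], gamma)  # update only the activated coordinate

print(np.allclose(x, 0.0))             # iterates reach the fixed point
```

Because soft-thresholding is firmly nonexpansive (hence 1/2-averaged), the stochastic Fejér monotonicity argument sketched in the citation guarantees almost-sure convergence of such coordinate-wise iterates to a fixed point.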