2020
DOI: 10.1016/j.automatica.2020.108962

Tracking-ADMM for distributed constraint-coupled optimization

Abstract: We consider constraint-coupled optimization problems in which agents of a network aim to cooperatively minimize the sum of local objective functions subject to individual constraints and a common linear coupling constraint. We propose a novel optimization algorithm that embeds a dynamic average consensus protocol in the parallel Alternating Direction Method of Multipliers (ADMM) to design a fully distributed scheme for the considered set-up. The dynamic average mechanism allows agents to track the time-varying…
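To make the set-up concrete, the following is a minimal Python sketch of a constraint-coupled problem solved by plain dual decomposition. It is not the paper's Tracking-ADMM update (which embeds dynamic average consensus in parallel ADMM); the quadratic local objectives, random coupling matrices, and step size are illustrative assumptions.

```python
import numpy as np

# Constraint-coupled set-up: N agents, each with a local variable x_i,
# cooperatively minimize sum_i f_i(x_i) subject to a common linear
# coupling constraint sum_i A_i x_i = b. Here f_i(x) = 0.5*||x - q_i||^2
# is an illustrative choice with a closed-form local minimizer.
rng = np.random.default_rng(0)
N, n, m = 5, 3, 2                       # agents, local dim, coupling dim
A = [rng.standard_normal((m, n)) for _ in range(N)]
q = [rng.standard_normal(n) for _ in range(N)]
b = rng.standard_normal(m)

# Plain dual decomposition (an illustrative baseline, NOT Tracking-ADMM):
# each agent minimizes its Lagrangian term; the common multiplier is then
# updated with the residual of the coupling constraint.
lam = np.zeros(m)
alpha = 0.02                            # dual step size (assumption)
for _ in range(2000):
    # local step: argmin_x 0.5*||x - q_i||^2 + lam^T A_i x = q_i - A_i^T lam
    x = [q[i] - A[i].T @ lam for i in range(N)]
    residual = sum(A[i] @ x[i] for i in range(N)) - b
    lam += alpha * residual             # gradient ascent on the dual

print("coupling residual norm:", np.linalg.norm(residual))
```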

Cited by 107 publications (77 citation statements)
References 47 publications
“…The following theorem shows an asymptotically efficient upper bound on the estimation residuals provided by algorithm (8).…”
Section: Analysis of the Estimation Accuracy (mentioning)
confidence: 99%
“…To date, there exist a number of approaches for the case when the functions F_i(x) are convex. In particular, the Alternating Direction Method of Multipliers [7], [8] and the subgradient method [9], [10] have been proposed. For non-convex problems, the works [11], [12] develop a large class of distributed algorithms based on various "functional-surrogate units".…”
Section: Introduction (mentioning)
confidence: 99%
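For the subgradient method mentioned in the excerpt above (cf. [9], [10]), here is a minimal sketch of a standard distributed subgradient iteration; the ring topology, mixing weights, and absolute-value objectives F_i(x) = |x - c_i| are assumptions chosen for illustration.

```python
import numpy as np

# Distributed subgradient sketch: N agents minimize sum_i F_i(x) with
# F_i(x) = |x - c_i| by alternating a consensus (mixing) step with a
# local subgradient step. Ring topology and targets c_i are illustrative.
N = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # local data (assumption)
W = np.zeros((N, N))                        # doubly stochastic weights
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)                             # one scalar estimate per agent
for t in range(1, 3001):
    x = W @ x                               # consensus step with neighbors
    g = np.sign(x - c)                      # subgradient of |x - c_i|
    x -= (1.0 / np.sqrt(t)) * g             # diminishing step size

print("agent estimates:", x)                # all near the median of c
```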
“…Therefore, following [14], at iteration t the algorithm consists of two steps: a local step in which operator n computes a minimizer of the following optimization problem…”
Section: B. Interference Level Sharing (mentioning)
confidence: 99%
“…where Q ∈ R^{L×M} is a parameter of ADMM that gradually enforces the equality constraints (see [14]), ‖·‖_2 denotes the ℓ_2 norm, c > 0 is a constant penalty parameter, and the superscript t denotes the iteration number. Also, the local step for the slack variable is performed as:…”
Section: B. Interference Level Sharing (mentioning)
confidence: 99%
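The two-step pattern described in the excerpt (a penalized local minimization followed by a slack-variable update) matches the generic ADMM template. Below is a minimal sketch of that template on a toy problem; the objective, the penalty c, and all numerical values are assumptions, not the formulation of [14].

```python
import numpy as np

# Generic ADMM sketch mirroring the pattern in the excerpt: a local
# minimization with penalty parameter c > 0, then a slack-variable
# update, then a multiplier step. The toy problem
#   min 0.5*||x - a||^2 + gamma*||z||_1   s.t.  x - z = 0
# is an illustrative stand-in, not the problem of [14].
a = np.array([3.0, -0.2, 1.5, 0.05])
gamma, c = 1.0, 2.0                      # penalty weights (assumptions)
x = np.zeros_like(a)
z = np.zeros_like(a)
u = np.zeros_like(a)                     # scaled dual multiplier

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for t in range(200):
    # local step: minimize 0.5*||x - a||^2 + (c/2)*||x - z + u||^2
    x = (a + c * (z - u)) / (1.0 + c)
    # slack-variable step: minimize gamma*||z||_1 + (c/2)*||x - z + u||^2
    z = soft_threshold(x + u, gamma / c)
    u += x - z                           # multiplier (dual) update

print("x:", x, "constraint residual:", np.linalg.norm(x - z))
```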
“…The extension to general convex functions is performed in [16] by adopting the nonnegative surplus method, at the expense of a slower convergence rate. ADMM-based algorithms are developed in [17], [18], and algorithms that handle communication delays in time-varying networks and perform event-triggered updates are studied in [19] and [20], respectively. We note that none of the above-mentioned works [1], [15]-[20] provides an explicit convergence rate for its algorithm.…”
Section: A. Literature Review (mentioning)
confidence: 99%