2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2017.8262757

Decentralized exact coupled optimization

Abstract: Distributed optimization methods with local updates have recently received a lot of attention due to their potential to reduce the communication cost of distributed methods. In these algorithms, a collection of nodes performs several local updates based on their local data and then communicates with each other to exchange estimate information. While there have been many studies on distributed local methods with centralized network connections, there has been less work on decentralized networks. In this work, we…

Cited by 7 publications (18 citation statements); references 37 publications. Citing publications span 2018–2020.

Citation statements (ordered by relevance):
“…For sparse networks with a large number of parameters to estimate, it is much more efficient to devise distributed techniques that solve (3) directly rather than transform (3) into the form in (1) via vector extension. It is shown in the simulations in [32] and [35] that this extension technique not only increases complexity but often degrades convergence performance as well, which we show analytically in this work.…”
Section: B. Contribution and Related Work (mentioning)
confidence: 56%
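
For orientation, the equation labels (1) and (3) follow the citing paper and are not defined in this excerpt; a plausible reading, stated here purely as an assumption, is that (1) is the standard consensus problem in which every agent estimates one common vector, while (3) is the coupled problem in which each agent owns only its own subvector:

```latex
% Hedged sketch: labels (1) and (3) are taken from the citing paper;
% the exact formulations there may differ.
\begin{align}
  & \min_{w} \; \sum_{k=1}^{N} J_k(w)
    && \text{(1): consensus, all agents share one global } w, \\
  & \min_{\{w_k\}} \; \sum_{k=1}^{N}
      J_k\big(w_k, \{w_\ell\}_{\ell \in \mathcal{N}_k}\big)
    && \text{(3): coupled, agent } k \text{ owns only its block } w_k.
\end{align}
```

Under this reading, the "vector extension" criticized above would have every agent carry the full stacked vector col{w_1, …, w_N} rather than only its own block, which explains the complexity increase.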
“…Similar clustering was used in [35] for the deterministic problem, where the ADMM method was employed with identical penalty factors across all clusters. In our previous work [32], we studied the same deterministic case but instead developed a first-order method for solving (3) without constraints by relying on the exact diffusion strategy from [41], [42], which, unlike ADMM, does not require inner minimization steps.…”
Section: B. Contribution and Related Work (mentioning)
confidence: 99%
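
To make the contrast with ADMM concrete, below is a minimal numpy sketch of the exact diffusion recursion described in [41], [42] (adapt, correct, combine). Each iteration costs one local gradient evaluation plus one neighborhood exchange, with no inner minimization, whereas each ADMM iteration requires every agent to solve a local subproblem. Function and variable names are illustrative, and this is the plain (uncoupled) recursion, not the coupled variant developed in [32].

```python
import numpy as np

def exact_diffusion(grad_fns, A, w0, mu=0.01, iters=500):
    """Minimal sketch of the exact diffusion recursion of [41], [42]
    (adapt / correct / combine). Names and defaults are illustrative.

    grad_fns : list of N callables, grad_fns[k](w) = gradient of J_k at w
    A        : (N, N) doubly stochastic combination matrix, A[l, k] > 0
               only if agents l and k are neighbors
    w0       : (N, M) initial estimates, one row per agent
    """
    N = len(grad_fns)
    A_bar = 0.5 * (np.eye(N) + A)   # damped combination matrix (I + A)/2
    w = w0.copy()
    psi_prev = w0.copy()            # psi_{-1} = w_0, so the first
                                    # correction step is a no-op
    for _ in range(iters):
        # adapt: local gradient step at every agent
        psi = np.stack([w[k] - mu * grad_fns[k](w[k]) for k in range(N)])
        # correct: remove the bias accumulated by plain diffusion
        phi = psi + w - psi_prev
        # combine: average corrected iterates over each neighborhood
        w = A_bar.T @ phi
        psi_prev = psi
    return w
```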
“…Thus, each constraint involves the neighborhood of an agent, so that for these types of problems we can select C_e = N_s (for s = e) since neighborhoods are naturally connected. Now, more generally, even if some chosen subnetwork happens to be disconnected, we can always embed it into a larger connected subnetwork as long as the entire network is connected; an explanation of this embedding procedure can be found in [28], [30]. We now provide one example.…”
Section: Problem Formulation (mentioning)
confidence: 99%
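
As an illustration of this embedding step, the following networkx sketch grows a possibly disconnected choice of nodes into a connected subnetwork by adding the nodes along shortest paths of the full network. This is one natural construction under the stated assumption that the whole network is connected; the procedure in [28], [30] may differ in detail.

```python
import networkx as nx

def connected_embedding(G, C):
    """Hedged sketch: embed a (possibly disconnected) subnetwork C of a
    connected graph G into a larger connected subnetwork by bridging its
    components with shortest paths. Illustrative, not the construction
    of [28], [30].
    """
    nodes = set(C)
    components = [set(c) for c in nx.connected_components(G.subgraph(nodes))]
    # Bridge the first component to another via a shortest path in the
    # full network; repeat until a single component remains.
    while len(components) > 1:
        src = next(iter(components[0]))
        dst = next(iter(components[1]))
        nodes.update(nx.shortest_path(G, src, dst))
        components = [set(c) for c in
                      nx.connected_components(G.subgraph(nodes))]
    return G.subgraph(nodes).copy()
```

Since G is connected, every bridging path exists and each pass strictly reduces the number of components, so the loop terminates with a connected induced subnetwork containing C.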
“…There have been many works in the literature studying distributed multitask adaptive strategies and their convergence behavior. Nevertheless, with few exceptions [20], most of these works focus on mean-square-error costs.…”
Section: Introduction (mentioning)
confidence: 99%
“…Based on the type of prior information that may be available about how the tasks are related to each other, multitask learning algorithms can be derived by translating this prior information into constraints on the parameter vectors to be inferred [12]–[22]. For example, in [18]–[20], distributed strategies are developed under the assumption that the parameter vectors across the agents overlap partially. A more general scenario is considered in [21], where it is assumed that the tasks across the agents are locally coupled through linear equality constraints.…”
Section: Introduction (mentioning)
confidence: 99%
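
To make this last scenario concrete, a locally coupled problem of the kind attributed to [21] can be written in the following generic form. The symbols B_{ke}, b_e, and the index sets C_e are illustrative here, and the exact notation in [21] may differ:

```latex
% Generic form of locally coupled linear equality constraints
% (notation is illustrative, not necessarily that of [21]).
\begin{equation}
  \min_{\{w_k\}} \; \sum_{k=1}^{N} J_k(w_k)
  \quad \text{subject to} \quad
  \sum_{k \in \mathcal{C}_e} B_{ke}\, w_k = b_e,
  \qquad e = 1, \ldots, E.
\end{equation}
```

Each constraint e ties together only the parameter vectors of the agents in its subset C_e, so the coupling stays local. The partial-overlap models of [18]–[20] can be viewed as the special case in which each constraint simply equates the shared entries of two neighboring parameter vectors.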