2015
DOI: 10.1109/tsp.2015.2415759

Stability and Performance Limits of Adaptive Primal-Dual Networks

Abstract: This work studies distributed primal-dual strategies for adaptation and learning over networks from streaming data. Two first-order methods are considered, based on the Arrow-Hurwicz (AH) and augmented Lagrangian (AL) techniques. Several revealing results are discovered in relation to the performance and stability of these strategies when employed over adaptive networks. The conclusions establish that the advantages that these methods exhibit for deterministic optimization problems do not necessarily carry over…
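
The abstract refers to first-order Arrow-Hurwicz (AH) and augmented Lagrangian (AL) iterations driven by streaming data. For orientation only, here is a minimal single-agent sketch of a stochastic AH (saddle-point) iteration for a linearly constrained mean-square-error cost; the data model, the constraint pair (B, b), and the step-sizes mu and nu are illustrative assumptions, and this is not the paper's distributed algorithm.

```python
import numpy as np

# Minimal sketch of a stochastic Arrow-Hurwicz (saddle-point) iteration for
#   min_w  E|d - u^T w|^2   subject to  B w = b,
# using instantaneous (streaming) data in place of the true gradient.
# The problem data below are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)
M = 5                                   # dimension of the parameter vector
w_true = rng.standard_normal(M)

B = rng.standard_normal((2, M))         # illustrative linear constraint B w = b
b = B @ w_true                          # make the true model feasible

mu = 0.01                               # constant primal step-size (enables adaptation)
nu = 0.01                               # constant dual step-size

w = np.zeros(M)                         # primal iterate
lam = np.zeros(B.shape[0])              # dual iterate (Lagrange multipliers)

for n in range(20000):
    u = rng.standard_normal(M)                         # streaming regressor
    d = u @ w_true + 0.1 * rng.standard_normal()       # noisy measurement

    grad_w = -2 * u * (d - u @ w)       # stochastic gradient of the quadratic cost
    # Arrow-Hurwicz: gradient descent on the primal, gradient ascent on the dual
    w = w - mu * (grad_w + B.T @ lam)
    lam = lam + nu * (B @ w - b)

print("constraint residual:", np.linalg.norm(B @ w - b))
print("estimation error   :", np.linalg.norm(w - w_true))
```

The constant step-sizes keep the recursion adapting to drifts in the data; the paper's analysis concerns how such AH/AL recursions behave when run in distributed form over adaptive networks.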



Cited by 46 publications (44 citation statements)
References 56 publications (101 reference statements)
“…Several useful strategies have been proposed to solve constrained and unconstrained versions of this problem in a fully decentralized manner [1]-[13]. Diffusion strategies [3], [8]-[12] are attractive since they are scalable, robust, and enable continuous learning and adaptation in response to drifts in the location of the minimizer due to changes in the costs or in the constraints.…”
Section: Introduction (mentioning)
confidence: 99%
“…The technique relies on combining diffusion adaptation with a stochastic gradient projection step, and on the use of constant step-sizes to enable continuous adaptation and learning from streaming data. Since we are learning from streaming data, the dual function cannot be computed exactly and the use of primal-dual methods may result in stability problems as already shown in [12]. For this reason, we focus on primal techniques.…”
Section: Introduction (mentioning)
confidence: 99%
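
The snippet above couples diffusion adaptation with a stochastic gradient projection step under constant step-sizes. A minimal sketch of one such adapt-then-combine (ATC) diffusion recursion with projection onto a norm ball is given below; the network (a uniform all-to-all combination matrix), the data model, the constraint radius, and the step-size are illustrative assumptions rather than the cited algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 10, 4                            # number of agents, parameter dimension
mu = 0.01                               # constant step-size (continuous adaptation)
radius = 5.0                            # hypothetical constraint: ||w|| <= radius

w_true = rng.standard_normal(M)
W = np.zeros((N, M))                    # one estimate per agent

# Illustrative combination matrix: uniform averaging over all agents
A = np.full((N, N), 1.0 / N)

def project_ball(x, r):
    """Euclidean projection onto the ball {x : ||x|| <= r}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

for n in range(5000):
    # Adaptation step: each agent takes a projected stochastic-gradient step
    psi = np.zeros_like(W)
    for k in range(N):
        u = rng.standard_normal(M)                     # streaming regressor at agent k
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy measurement at agent k
        grad = -2 * u * (d - u @ W[k])
        psi[k] = project_ball(W[k] - mu * grad, radius)
    # Combination step: each agent averages its neighbors' intermediate estimates
    W = A @ psi

print("mean estimation error:", np.mean(np.linalg.norm(W - w_true, axis=1)))
```

Because the recursion stays purely primal, it avoids the dual updates whose stochastic errors are the source of the stability issues analyzed in the paper under discussion.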
“…The combination matrices A are chosen using the Metropolis rule (66). We see that the coupled diffusion algorithm outperforms its ADMM counterpart even though the ADMM uses global information in step (116), which indicates that primal-dual methods do not necessarily perform well under adaptive networks, as shown in [47]. We also notice that the smaller the step-size, the smaller the steady-state MSD and the closer the coupled diffusion comes to the centralized solution.…”
Section: A. Unconstrained Case (mentioning)
confidence: 93%
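
The snippet above builds the combination matrices A with the Metropolis rule. A minimal sketch of that rule for an undirected network described by an adjacency matrix follows; the 4-agent ring topology is an illustrative assumption, and neighborhood sizes are taken to include the agent itself, which is one common convention.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis-rule combination matrix for an undirected graph.

    adj[k, l] = 1 if agents k and l are neighbors (k != l), else 0.
    With n_k denoting the neighborhood size of agent k (including k itself):
      A[k, l] = 1 / max(n_k, n_l)                 for neighbors l != k,
      A[k, k] = 1 - sum of the off-diagonal entries in row k.
    The result is symmetric and doubly stochastic.
    """
    adj = np.asarray(adj, dtype=float)
    N = adj.shape[0]
    nsize = adj.sum(axis=1) + 1               # neighborhood sizes (agent included)
    A = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if k != l and adj[k, l]:
                A[k, l] = 1.0 / max(nsize[k], nsize[l])
        A[k, k] = 1.0 - A[k].sum()
    return A

# Illustrative 4-agent ring network
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
A = metropolis_weights(adj)
print(A)
print("row sums   :", A.sum(axis=1))          # all ones
print("column sums:", A.sum(axis=0))          # all ones (doubly stochastic)
```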
“…III. ASYNCHRONOUS SADDLE POINT METHOD. Methods based upon distributed gradient descent, and penalty methods more generally [28]-[30], are inapplicable to settings with nonlinear constraints, with the exception of [31], which requires attenuating learning rates to attain constraint satisfaction. On the other hand, the dual methods proposed in [32]-[34] require a nonlinear minimization computation at each algorithm iteration, and are thus impractically costly.…”
Section: Remark (mentioning)
confidence: 99%