2014
DOI: 10.1007/s10107-014-0808-7

An augmented Lagrangian method for distributed optimization

Abstract: We propose a novel distributed method for convex optimization problems with a certain separability structure. The method is based on the augmented Lagrangian framework. We analyze its convergence and provide an application to two network models, as well as to a two-stage stochastic optimization problem. The proposed method compares favorably to two augmented Lagrangian decomposition methods known in the literature, as well as to decomposition methods based on the ordinary Lagrangian function.
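To make the augmented Lagrangian framework concrete, here is a minimal sketch on a toy separable problem. This is a generic alternating-minimization (ADMM-style) augmented Lagrangian iteration, not the paper's specific method; the problem, constants, and function names are illustrative assumptions.

```python
# Hedged sketch: generic augmented-Lagrangian decomposition with alternating
# minimization (ADMM-style), NOT the specific method proposed in the paper.
# Toy separable problem:  minimize (x - 1)^2 + (z + 1)^2  subject to  x = z.
# The optimum is x = z = 0 with multiplier y = 2.

def augmented_lagrangian_demo(rho=1.0, iters=200):
    x = z = y = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x-1)^2 + y*(x - z) + (rho/2)*(x - z)^2
        x = (2.0 - y + rho * z) / (2.0 + rho)
        # z-update: argmin_z (z+1)^2 + y*(x - z) + (rho/2)*(x - z)^2
        z = (y + rho * x - 2.0) / (2.0 + rho)
        # multiplier (dual) update on the coupling constraint x - z = 0
        y = y + rho * (x - z)
    return x, z, y
```

Each subproblem involves only one block of variables, which is the separability that decomposition methods exploit; here the quadratic subproblems have closed-form minimizers.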

Cited by 125 publications (98 citation statements)
References 23 publications
“…By doing so, we can use the algorithm in [47] to solve the problem. Note that such an algorithm is a pure optimization algorithm that relies on the strong assumption that EUs truthfully report their private information to achieve the social optimum.…”
Section: Distributed Pure Optimization (DPO) Algorithm
confidence: 99%
“…To show that Algorithm 2 is an ADAL-based algorithm, we consider the augmented Lagrangian function of the R-SWM-M Problem with the following decomposable structures [47]:…”
Section: Lagrangian and Augmented Lagrangian
confidence: 99%
“…As such, distributed algorithms avoid the cost and fragility associated with centralized coordination, and provide better privacy for the autonomous decision makers. Popular distributed optimization methods in the literature include distributed subgradient methods [4], [5], dual averaging methods [6], and augmented Lagrangian methods [7]–[10]. The above distributed optimization methods usually assume a static objective function.…”
Section: Introduction
confidence: 99%
“…For such f, (1.2) is often true, for instance because U_k is compact and finite bounds −∞ < u̲_k ≤ u_k ≤ ū_k < ∞ are known for each u_k ∈ U_k (very often, U_k ⊆ {0, 1}^{n_k}). Minimizing f solves the Lagrangian dual of (1.3), which has countless applications; e.g., [4, 5, 12, 16, 17, 24, 27] among many others. Typically (1.3) is "difficult", due to either being large-scale, NP-hard, or both.…”
Section: Introduction
confidence: 99%
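The quote above concerns minimizing a nonsmooth function f that arises as a Lagrangian dual. A minimal sketch of that setup, on a toy binary program of my own choosing (not from the cited paper; all names are illustrative), using projected subgradient ascent on the multiplier:

```python
# Hedged sketch (assumed toy instance, not from the cited paper):
# Lagrangian dual of  min x1 + 2*x2  s.t.  x1 + x2 = 1,  x in {0,1}^2.
# Dualizing the equality constraint with multiplier lam gives
#   q(lam) = min_{x in {0,1}^2} (1+lam)*x1 + (2+lam)*x2 - lam,
# a concave piecewise-linear function; for this instance the dual
# optimum equals the primal optimum, 1.

def dual_oracle(lam):
    """Evaluate q(lam) and a subgradient (the constraint residual)."""
    x1 = 1.0 if 1.0 + lam < 0 else 0.0   # each term minimized separately
    x2 = 1.0 if 2.0 + lam < 0 else 0.0
    q = (1.0 + lam) * x1 + (2.0 + lam) * x2 - lam
    g = x1 + x2 - 1.0                     # subgradient of q at lam
    return q, g

def subgradient_ascent(lam=0.0, iters=50):
    best_q = float("-inf")
    for k in range(iters):
        q, g = dual_oracle(lam)
        best_q = max(best_q, q)
        lam = lam + g / (k + 1)           # diminishing step sizes
    return lam, best_q
```

The dual function decomposes over the binary variables, so the oracle is cheap even though the primal problem is combinatorial; this cheap-oracle structure is what makes the Lagrangian dual attractive for large-scale or NP-hard problems of the form (1.3).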