2017
DOI: 10.1109/tac.2017.2658438
On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization

Cited by 49 publications (50 citation statements) | References 48 publications
“…In the relaxed optimization problem, only linear constraints remain; however, the recursive feasibility of the solution cannot be guaranteed. In the second step, the ADAL method [1] is applied to solve the relaxed optimization problem in a decentralized manner. Specifically, each individual zone can determine its local decision variables by solving a small-scale subproblem in parallel at each iteration.…”
Section: HVAC Energy Cost Optimization for a Multi-zone
confidence: 99%
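
The decentralized iteration these citing papers apply can be sketched as follows. This is a minimal, illustrative Python rendering of an ADAL-style loop for a problem of the form min Σᵢ fᵢ(xᵢ) subject to Σᵢ Aᵢxᵢ = b; the problem data, the step sizes rho and tau, and the use of scipy.optimize.minimize for the local subproblems are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def adal(f_list, A_list, b, rho=1.0, tau=0.5, iters=200):
    """Illustrative ADAL loop for min sum_i f_i(x_i) s.t. sum_i A_i @ x_i = b.

    Each agent i minimizes its local augmented Lagrangian (written
    sequentially here, but the subproblems are independent and could run
    in parallel), after which the primal iterates and the dual variable
    are relaxed by the step size tau.
    """
    x = [np.zeros(A.shape[1]) for A in A_list]   # local decision variables
    lam = np.zeros(len(b))                       # dual variable (multipliers)
    for _ in range(iters):
        coupling = sum(A @ xi for A, xi in zip(A_list, x)) - b
        x_hat = []
        for i, (fi, Ai) in enumerate(zip(f_list, A_list)):
            r_i = coupling - Ai @ x[i]           # other agents' contribution
            def local_al(xi, fi=fi, Ai=Ai, r_i=r_i):
                # local augmented Lagrangian seen by agent i
                return fi(xi) + lam @ (Ai @ xi) + 0.5 * rho * np.sum((Ai @ xi + r_i) ** 2)
            x_hat.append(minimize(local_al, x[i]).x)   # small-scale subproblem
        # relaxed primal update followed by a dual ascent step
        x = [xi + tau * (xh - xi) for xi, xh in zip(x, x_hat)]
        lam = lam + rho * tau * (sum(A @ xi for A, xi in zip(A_list, x)) - b)
    return x, lam

# Toy usage: two agents coupled through one linear constraint x1[0] + x2[1] = 1.
f_list = [lambda x: np.sum(x ** 2), lambda x: np.sum((x - 1.0) ** 2)]
A_list = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
x_opt, lam_opt = adal(f_list, A_list, np.array([1.0]))
```

In the convergence theory for ADAL, tau is typically restricted by the maximum number of agents coupled through any single constraint; the fixed tau=0.5 above is only a placeholder.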
“…However, the recursive feasibility of the solution cannot be guaranteed. In the second step, an ADAL method [1] is applied to solve the nonconvex relaxed optimization problem in a decentralized manner. Last, and most importantly, the third step focuses on recovering the recursive feasibility of the solution for (P1).…”
Section: Decentralized Approach
confidence: 99%
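
To make the three-step structure concrete, the following toy sketch mirrors it end to end. The quadratic objective, the box set standing in for the original feasible region, and the projection used to recover feasibility are all assumptions for illustration only; the cited work's relaxation and recovery steps are specific to its HVAC formulation (P1), and its Step 2 runs the decentralized ADAL loop rather than the plain gradient descent used here.

```python
import numpy as np

def solve_relaxed(Q, c, iters=500, step=0.05):
    # Step 2 stand-in: solve the relaxed problem min 0.5 x'Qx + c'x,
    # i.e. the problem with the hard constraints dropped (Step 1).
    x = np.zeros(len(c))
    for _ in range(iters):
        x -= step * (Q @ x + c)   # plain gradient descent for illustration
    return x

def recover_feasibility(x, lo, hi):
    # Step 3 stand-in: map the relaxed solution back into the original
    # feasible set; here the set is a box and the map is a projection.
    return np.clip(x, lo, hi)

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -0.5])
x_relaxed = solve_relaxed(Q, c)                      # Steps 1-2
x_feasible = recover_feasibility(x_relaxed, 0.0, 0.3)  # Step 3
```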
“…A few early examples of (non-stochastic or deterministic) distributed non-convex optimization algorithms include the Distributed Approximate Dual Subgradient (DADS) Algorithm [8], NonconvEx primal-dual SpliTTing (NESTT) algorithm [9], and the Proximal Primal-Dual Algorithm (Prox-PDA) [10]. More recently, a non-convex version of the accelerated distributed augmented Lagrangians (ADAL) algorithm is presented in [11] and successive convex approximation (SCA)-based algorithms such as iNner cOnVex Approximation (NOVA) and in-Network succEssive conveX approximaTion algorithm (NEXT) are given in [12] and [13], respectively. References [14]- [16] provide several distributed alternating direction method of multipliers (ADMM) based non-convex optimization algorithms.…”
confidence: 99%