2020
DOI: 10.48550/arxiv.2008.12139
Preprint

A Two-level ADMM Algorithm for AC OPF with Global Convergence Guarantees

Cited by 3 publications (5 citation statements)
References 27 publications

“…Peng and Low (2014, 2016) applied ADMM to convex relaxations of ACOPF on radial networks. The numerical success of ADMM has also been observed on nonconvex ACOPF (Chung et al. 2005, 2011, Sun et al. 2013, Erseghe 2014, Mhanna et al. 2019), with convergence studied under certain technical assumptions (Erseghe 2014, Sun and Sun 2020).…”
Section: Literature Review
confidence: 92%
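The citing works above point to ADMM-based decomposition as the workhorse for (AC)OPF. As context for how such iterations are typically structured, below is a minimal sketch of a generic consensus ADMM loop (scaled dual form) in Python; the callback solve_local_subproblem and the penalty rho are hypothetical placeholders for illustration, and this is not the two-level algorithm of the indexed paper.

import numpy as np

def admm_consensus(solve_local_subproblem, n_agents, dim, rho=1.0, iters=200, tol=1e-6):
    # Generic consensus ADMM sketch (scaled dual form).
    # solve_local_subproblem(i, target, rho) is a hypothetical callback returning
    #     argmin_x  f_i(x) + (rho/2) * ||x - target||^2
    # for agent/subnetwork i.
    x = np.zeros((n_agents, dim))   # local copies held by each agent
    z = np.zeros(dim)               # shared consensus variable
    u = np.zeros((n_agents, dim))   # scaled dual variables

    for _ in range(iters):
        # x-update: each agent solves its own subproblem (parallelizable across agents)
        for i in range(n_agents):
            x[i] = solve_local_subproblem(i, z - u[i], rho)
        # z-update: average of local copies plus scaled duals
        z = (x + u).mean(axis=0)
        # u-update: dual ascent on the consensus constraints x_i = z
        u += x - z
        # stop once the primal residual is small
        if np.linalg.norm(x - z) < tol:
            break
    return z

In an OPF setting, each f_i would collect a region's generation cost and local power-flow constraints; the two-level scheme in the indexed paper wraps an outer loop around inner updates of this kind to obtain convergence guarantees in the nonconvex case, which this plain sketch does not provide.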
“…As the cardinality of K increases, the accuracy of the demand estimated by (6) increases while sacrificing computation. We denote by 𝒦 a collection of various K and by D̂_zl(K) a demand estimated by (6) with K ∈ 𝒦.…”
Section: Motivating Example (Data Leakage)
confidence: 99%
“…First, we consider various K(T) when constructing the adversarial problem (6). As T increases, theoretically, the accuracy of the demand estimated by solving (6) with K ∈ K(T) increases. We report in Figure 4 an average demand estimation error (DEE):…”
Section: Numerical Experiments
confidence: 99%
“…The computational bottleneck of the master problem solution can be avoided by decomposing the network into two subnetworks and by parallelizing each subproblem solution. For example, we can use ADMM (e.g., [28]) for solving each subproblem in parallel. We will address this issue in our future work.…”
Section: Concluding Remarks and Future Work
confidence: 99%