2020
DOI: 10.48550/arxiv.2006.11029
Preprint

Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks

Abstract: This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks have the potential to substantially reduce the computing time of OPF solutions. However, the lack of guarantees for their worst-case performance remains a major barrier for their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs to obtain worst…
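For readers trying to picture the MILP formulation the abstract alludes to, the standard building block is an exact big-M encoding of each ReLU activation, which lets a MILP solver maximize the network's error or constraint violation over the entire input domain. The sketch below is a minimal illustration of that generic encoding, not the paper's actual formulation: the weights, input bounds, big-M value, and the PuLP solver interface are all assumptions made for the example.

```python
# Minimal sketch: big-M MILP encoding of one ReLU layer, assuming
# known weights W, b, box-bounded inputs, and the PuLP interface.
# The paper's actual formulation (and network) will differ.
import pulp

W = [[1.0, -2.0], [0.5, 1.5]]   # assumed trained weights
b = [0.2, -0.1]                 # assumed biases
M = 100.0                       # big-M: valid bound on |pre-activation|

prob = pulp.LpProblem("worst_case_relu", pulp.LpMaximize)

# Box-bounded network inputs (e.g., scaled load demands for OPF).
x = [pulp.LpVariable(f"x{i}", lowBound=0.0, upBound=1.0) for i in range(2)]
# Post-activation outputs and ReLU on/off indicators.
h = [pulp.LpVariable(f"h{j}", lowBound=0.0) for j in range(2)]
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(2)]

for j in range(2):
    pre = pulp.lpSum(W[j][i] * x[i] for i in range(2)) + b[j]
    # Exact ReLU: h = max(pre, 0), enforced via big-M inequalities.
    prob += h[j] >= pre
    prob += h[j] <= pre + M * (1 - z[j])
    prob += h[j] <= M * z[j]

# Objective: worst case of a linear function of the outputs,
# e.g., a proxy for constraint violation in the OPF setting.
prob += h[0] - h[1]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case value:", pulp.value(prob.objective))
```

Tightening M (for instance via interval arithmetic on each layer's input bounds) is what makes such formulations solvable at scale; with M too loose, the LP relaxation is weak and solve times grow quickly.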

Cited by 3 publications (8 citation statements)
References 21 publications
“…Since Algorithm 1 only relies on getting µ to be in the correct range, the active constraints would be identified correctly for all of these cases. This theorem also formalizes the empirical observation in [31], where the error of neural network-based OPF is reduced if training points are on the boundary of the feasible region. […] of the solutions obtained from our algorithm are optimal, while less than half of the solutions from the end-to-end model are feasible.…”
Section: Generalization For Unseen Regions
confidence: 81%
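The active-set logic in the statement above (constraints whose multiplier µ lies in the correct range are flagged active) mirrors a standard consequence of complementary slackness: at the optimum of a convex program, inequality constraints with strictly positive dual values are binding. Below is a hypothetical sketch of that general principle, unrelated to the cited Algorithm 1 and using CVXPY purely for illustration.

```python
# Hypothetical sketch: recovering the active constraint set of a small
# convex QP from its dual variables; by complementary slackness, a
# constraint with a strictly positive multiplier binds at the optimum.
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
constraints = [x[0] + x[1] <= 1, x[0] >= 0, x[1] >= 0]
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 1.0])))
cp.Problem(objective, constraints).solve()

# mu_j > 0 (up to solver tolerance) identifies the active constraints.
active = [j for j, c in enumerate(constraints) if c.dual_value > 1e-6]
print("active constraints:", active)   # here only x0 + x1 <= 1 binds
```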
“…However, unlike LU factorization, it is hard to guarantee that the machine learning methods would always recover the right answer. The approach in [31] can bound the worst-case errors for a trained neural network with fixed parameters. But these types of ex post analyses are hard to generalize and do not shed light on why a method may perform better or worse.…”
Section: Introduction
confidence: 99%
“…[table comparing conic relaxations [6] to [9], heuristics [11] to [12], AI [15] to [16], and energy management [17] to [19] against the proposed approach on convergence guarantee, global optimum, energy storage, practical orientation, implementation, and stochastic modeling] …on conic approximations such as semidefinite programming and second-order cone optimization (see [7] and the references therein for a complete review of this subject). These relaxations transform the non-convex problem into a convex one, with theoretical and practical advantages related to global optimality, uniqueness of the solution, and a fast convergence rate [8].…”
Section: Aspect
confidence: 99%
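To make the conic-relaxation idea in the statement above concrete, the sketch below relaxes the single nonconvex equality of a one-line branch flow (DistFlow) model, P² + Q² = l·v, into a second-order cone inequality. All numerical data, variable names, and the use of CVXPY are assumptions for illustration and are not taken from the cited references [6] to [9].

```python
# Minimal sketch of an SOC relaxation for one line of a branch flow
# (DistFlow) model; the data values and CVXPY usage are assumed for
# illustration and are not from the cited references.
import cvxpy as cp

r, x = 0.01, 0.02               # assumed line impedance (p.u.)
p_load, q_load = 0.8, 0.3       # assumed load at the receiving bus (p.u.)
v0 = 1.0                        # squared voltage at the slack (sending) bus

v1 = cp.Variable()              # squared voltage at the receiving bus
P = cp.Variable()               # active power entering the line
Q = cp.Variable()               # reactive power entering the line
l = cp.Variable(nonneg=True)    # squared current magnitude

constraints = [
    P - r * l == p_load,        # active power balance (r*l = losses)
    Q - x * l == q_load,        # reactive power balance
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,  # voltage drop
    0.81 <= v1, v1 <= 1.21,     # voltage magnitude within [0.9, 1.1] p.u.
    # Relaxed from the nonconvex equality P^2 + Q^2 == l * v0;
    # this inequality is a second-order cone constraint.
    cp.square(P) + cp.square(Q) <= l * v0,
]

prob = cp.Problem(cp.Minimize(r * l), constraints)  # minimize line losses
prob.solve()
print("status:", prob.status, "| losses:", r * l.value)
```

When the relaxation is tight (the inequality holds with equality at the optimum), the recovered solution is feasible, and hence globally optimal, for the original nonconvex problem, which is the "global optimum" advantage the quoted passage refers to.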
“…Other methods based on artificial intelligence have also been proposed, especially reinforcement learning [14] and neural networks [15]. These types of algorithms are promising for real-time implementation, although they may be improved by theoretical analysis of the optimization problem [16].…”
Section: Aspect
confidence: 99%