2021
DOI: 10.48550/arxiv.2112.08091
Preprint

Ensuring DNN Solution Feasibility for Optimization Problems with Convex Constraints and Its Application to DC Optimal Power Flow Problems

Abstract: Ensuring solution feasibility is a key challenge in developing Deep Neural Network (DNN) schemes for solving constrained optimization problems, due to inherent DNN prediction errors. In this paper, we propose a "preventive learning" framework to systematically guarantee DNN solution feasibility for problems with convex constraints and general objective functions. We first apply a predict-and-reconstruct design to not only guarantee equality constraints but also exploit them to reduce the number of variables to…
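The predict-and-reconstruct design mentioned in the abstract can be sketched as follows. This is an illustrative example, not the paper's implementation: the model predicts only a subset of variables x1, and the remaining variables x2 are recovered from the linear equality constraints A1·x1 + A2·x2 = b, so the equalities hold exactly by construction regardless of prediction error. All names (`A1`, `A2`, `reconstruct`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 3, 2                       # predicted vs. reconstructed variables
A1 = rng.standard_normal((n2, n1))  # equality-constraint block for x1
A2 = rng.standard_normal((n2, n2))  # block for x2; assumed invertible
b = rng.standard_normal(n2)

def reconstruct(x1):
    """Recover x2 so that A1 @ x1 + A2 @ x2 = b holds exactly."""
    return np.linalg.solve(A2, b - A1 @ x1)

x1 = rng.standard_normal(n1)        # stand-in for a DNN prediction
x2 = reconstruct(x1)

# The equality constraints are satisfied by construction, even though
# x1 is an (arbitrarily inaccurate) prediction.
assert np.allclose(A1 @ x1 + A2 @ x2, b)
```

A side benefit, noted in the abstract, is that the DNN only has to output the n1 independent variables, shrinking the prediction target.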

Cited by 6 publications (10 citation statements)
References 55 publications
“…The latter approach, of applying a model-based optimizer to features extracted by a DNN as in Example 19, can also be used to enforce decisions made by a DNN to comply to some underlying physical requirements, see, e.g., (33).…”
Section: Example 18 Consider Again the Setting Of
Mentioning confidence: 99%
“…Here, (11) becomes (25) and the tuning is done via end-to-end training as in (24). The latter approach, of applying a model-based optimizer to features extracted by a DNN as in Example 19, can also be used to enforce decisions made by a DNN to comply to some underlying physical requirements, see, e.g., [34].…”
Section: DNN-Aided Optimization
Mentioning confidence: 99%
“…Regularization can greatly enhance the performance of NN models by mitigating data overfitting and improving the training speed. Similarly for OPF learning, regularization has been introduced to, e.g., improve the constraint satisfaction [9], or approach the first-order optimality [6], [10]. Nonetheless, the flow constraint in (1e) is by and large the most critical feasibility condition of ac-OPF, motivating us to develop a new ac-feasibility regularization (FR) approach.…”
Section: AC-Feasibility Regularization
Mentioning confidence: 99%
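The feasibility-regularization idea in the statement above can be sketched as a training loss that augments the usual prediction error with a penalty on inequality-constraint violations. This is a generic sketch under assumed names (`fr_loss`, `G`, `h`, `weight`), not the cited papers' exact formulation:

```python
import numpy as np

def fr_loss(x_pred, x_true, G, h, weight=10.0):
    """Prediction loss plus a hinge penalty on violations of G @ x <= h."""
    mse = np.mean((x_pred - x_true) ** 2)
    violation = np.maximum(0.0, G @ x_pred - h)  # elementwise, zero if feasible
    return mse + weight * np.sum(violation ** 2)

# Toy box constraints: each coordinate must stay at or below 1.
G = np.eye(2)
h = np.array([1.0, 1.0])
target = np.array([0.5, 0.5])

feasible = np.array([0.5, 0.5])
infeasible = np.array([1.5, 0.5])

# Violating the limits adds a strictly positive penalty on top of the MSE.
assert fr_loss(infeasible, target, G, h) > fr_loss(feasible, target, G, h)
```

The penalty `weight` is the kind of hyper-parameter the second statement below criticizes: KKT-style regularization multiplies such coefficients, one per regularization term.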
“…network-wide line limit constraints. For example, a feasible domain technique was developed in [5], [9], while KKT conditions for ac/dc-OPF were used for training regularization [6], [10]. While the first approach could affect the OPF solution optimality, the second one tends to add a large number of regularization terms, all of which require the design of weight coefficients (hyper-parameters).…”
Section: Introduction
Mentioning confidence: 99%