2020
DOI: 10.48550/arxiv.2003.12685
Preprint

Distributionally Robust Chance-Constrained Programs with Right-Hand Side Uncertainty under Wasserstein Ambiguity

Abstract: We consider exact deterministic mixed-integer programming (MIP) reformulations of distributionally robust chance-constrained programs (DR-CCP) with random right-hand sides over Wasserstein ambiguity sets. The existing MIP formulations are known to have weak continuous relaxation bounds, and, consequently, for hard instances with small radius, or with a large number of scenarios, the branch-and-bound based solution processes suffer from large optimality gaps even after hours of computation time. This significan…
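To fix ideas, the following is a minimal sketch of the DR-CCP setup the abstract describes, assuming the standard Wasserstein formulation with right-hand side uncertainty; the symbols ($x$, $c$, $A$, $\theta$, $\epsilon$, $N$) are illustrative notation, not taken from the paper:

\[
\min_{x \in X} \; c^{\top} x
\quad \text{s.t.} \quad
\inf_{\mathbb{P} \in \mathcal{F}_N(\theta)} \mathbb{P}\bigl( A x \ge \xi \bigr) \ge 1 - \epsilon,
\qquad
\mathcal{F}_N(\theta) = \bigl\{ \mathbb{P} : W_1\bigl(\mathbb{P}, \widehat{\mathbb{P}}_N\bigr) \le \theta \bigr\},
\]

where $\widehat{\mathbb{P}}_N$ is the empirical distribution of the $N$ observed scenarios $\xi^1, \dots, \xi^N$, $W_1$ is the 1-Wasserstein distance, and the random vector $\xi$ appears only on the right-hand side. Exact MIP reformulations of this constraint typically introduce one binary variable per scenario, which is where the weak continuous relaxations mentioned in the abstract arise.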

Cited by 5 publications (9 citation statements). References 38 publications.
“…We proceed with a contradiction argument to show (16). Recall the assertion that (16) holds P∞-almost surely.…”
Section: Distributionally Robust Risk-constrained Programs and Their … (mentioning)
confidence: 95%
“…that has finite measure under the distribution P∞ and each element of H violates the limit (16). Here, Σ is some uncountable index set.…”
Section: Distributionally Robust Risk-constrained Programs and Their … (mentioning)
confidence: 99%
“…Our formulation postulates that the future environment, characterized by a joint distribution on the context and all the rewards when taking different actions, lies in a Kullback-Leibler neighborhood around the training environment's distribution, thereby allowing a robust policy to be learned from training data without assuming that the future environment is the same as the past. Although there is a growing literature on distributionally robust optimization (DRO) (see, e.g., [9,19,31,56,6,26,47,21,61,57,39,14,59,64,41,48,66,46,69,1,68,59,27,14,28,10,23,22,30]), which shares the same philosophical underpinning of distributional robustness as ours, the existing DRO literature has mostly focused on statistical learning aspects, including supervised learning and feature selection type problems, rather than on decision making. To the best of our knowledge, we provide the first distributionally robust formulation for policy evaluation and learning under bandit feedback in a general, non-parametric space.…”
Section: Our Contributions and Related Work (mentioning)
confidence: 99%
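To make the robust objective in this excerpt concrete, here is a minimal sketch of a Kullback-Leibler-neighborhood distributionally robust policy value; the notation ($\mathbb{P}_0$, $\delta$, $\mathbf{r}$, $\pi$) is assumed for illustration and not taken from the citing paper:

\[
V_{\mathrm{rob}}(\pi) \;=\; \inf_{\mathbb{Q} \,:\, D_{\mathrm{KL}}(\mathbb{Q} \,\|\, \mathbb{P}_0) \le \delta} \; \mathbb{E}_{(X, \mathbf{r}) \sim \mathbb{Q},\; A \sim \pi(\cdot \mid X)} \bigl[ \mathbf{r}(A) \bigr],
\]

where $\mathbb{P}_0$ is the training environment's joint distribution over the context $X$ and the reward vector $\mathbf{r}$ (one reward per action), and $\delta$ is the radius of the KL neighborhood; robust policy learning then maximizes $V_{\mathrm{rob}}(\pi)$ over a policy class.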