2021
DOI: 10.1287/mnsc.2020.3678

From Data to Decisions: Distributionally Robust Optimization Is Optimal

Abstract: We study stochastic programs where the decision maker cannot observe the distribution of the exogenous uncertainties but has access to a finite set of independent samples from this distribution. In this setting, the goal is to find a procedure that transforms the data to an estimate of the expected cost function under the unknown data-generating distribution, that is, a predictor, and an optimizer of the estimated cost function that serves as a near-optimal candidate decision, that is, a prescriptor. As functi…
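
For orientation, the setting described in the abstract can be written compactly; the notation below is ours and not necessarily the paper's:

\[
\min_{x \in X} \; \mathbb{E}_{\mathbb{P}}[c(x,\xi)], \qquad \mathbb{P} \ \text{unknown, observed only through i.i.d. samples } \xi_1, \dots, \xi_N \sim \mathbb{P}.
\]

A predictor maps the sample to a cost estimate \(\hat{c}_N(x)\), and a prescriptor returns a minimizer \(\hat{x}_N \in \arg\min_{x \in X} \hat{c}_N(x)\) as a candidate decision; both are statistical estimators, since they are functions of the data.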

Cited by 76 publications (88 citation statements)
References: 36 publications
“…Thirdly, we can bound the expected value, over all observations and all possible distributions, of the optimal objective value of the proposed stochastic optimization problem with a finite number of observations by a constant, and the asymptotic analysis then yields convergence of this constant to zero. This stands in contrast to the finite-sample results of Van Parys et al. [54], for example their Theorem 8, in that they assume the asymptotic distribution P^∞ (i.e., the sample-path distribution), whereas our results hold for all a priori distributions; cf. Theorem 1.…”
Section: Minimax: Distributionally Robust Stochastic Optimization (contrasting)
confidence: 88%
“…Van Parys et al. [54] also deal with data-driven distributional optimization and obtain a particular kind of asymptotically optimal decision. Similar to our work, they also provide a series of finite-sample results.…”
Section: Minimax: Distributionally Robust Stochastic Optimization (mentioning)
confidence: 99%
“…Recently, Van Parys et al. [171] reached a strong conclusion, reflected in the title of their article, "From data to decisions: distributionally robust optimization is optimal". They proved that, by solving a distributionally robust optimization problem over P_{φ-KL}, the best data-driven decision can be obtained with guarantees of the best out-of-sample performance.…”
Section: Hu and Hong (mentioning)
confidence: 99%
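
To make the ambiguity set P_{φ-KL} concrete: the worst-case expected loss over a relative-entropy (KL) ball around the empirical distribution admits a well-known one-dimensional dual, inf_{α>0} { αρ + α log E_{P̂}[exp(ℓ/α)] }. The sketch below evaluates this dual numerically; the function name, the radius rho, and the sample losses are our own illustrative choices, not the construction used in [171]:

import numpy as np
from scipy.optimize import minimize_scalar

def kl_dro_worst_case(losses, rho):
    # Worst-case expected loss over {Q : D_KL(Q || P_hat) <= rho},
    # where P_hat is the empirical distribution of `losses`, via the
    # standard dual  inf_{a > 0}  a*rho + a*log( mean(exp(losses / a)) ).
    losses = np.asarray(losses, dtype=float)
    shift = losses.max()  # shift out the max for a stable log-mean-exp

    def dual(a):
        # a * log mean exp(losses/a) = shift + a * log mean exp((losses - shift)/a)
        lme = shift + a * np.log(np.mean(np.exp((losses - shift) / a)))
        return a * rho + lme

    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method="bounded")
    return res.fun

# Illustrative use: worst-case mean of hypothetical cost samples.
rng = np.random.default_rng(0)
losses = rng.normal(loc=1.0, scale=0.5, size=1000)
print(kl_dro_worst_case(losses, rho=0.05))  # exceeds the empirical mean

As rho shrinks to zero the worst-case value collapses to the empirical mean, which is one way to see how the radius trades conservatism against out-of-sample guarantees.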
“…Distributionally robust optimization: Probabilistic guarantees and concentration results have been the fundamental building blocks in the distributionally robust optimization (DRO) literature. In the DRO setting, reformulations of ambiguous chance constraints (3) have been derived in cases where the ambiguity set P is constructed from bounds on moments of the distribution [18,24,48,45], or is defined as a ball around the empirical distribution according to phi-divergence [46], the f-divergence [35], the Wasserstein distance [23,15,21,42,32], or the relative entropy [40]. We refer to [28] and references therein for a comprehensive review.…”
Section: Literature Review (mentioning)
confidence: 99%
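
For reference, the ball-type ambiguity sets named in this statement take the following generic form (notation ours, not that of the cited works):

\[
\mathcal{P}_{\phi}(r) = \{\, Q : D_{\phi}(Q \,\|\, \hat{P}_N) \le r \,\}, \qquad
\mathcal{P}_{W}(\varepsilon) = \{\, Q : W(Q, \hat{P}_N) \le \varepsilon \,\},
\]

where \(\hat{P}_N\) is the empirical distribution, \(D_\phi\) is a phi-divergence (relative entropy corresponds to \(\phi(t) = t \log t\)), and \(W\) is the Wasserstein distance; moment-based sets instead constrain quantities such as \(\mathbb{E}_Q[\xi]\) and \(\mathbb{E}_Q[\xi \xi^\top]\) to prescribed regions.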