2021
DOI: 10.1007/s10107-021-01724-0

Distributionally robust stochastic programs with side information based on trimmings

Abstract: We consider stochastic programs conditional on some covariate information, where the only knowledge of the possible relationship between the uncertain parameters and the covariates is reduced to a finite data sample of their joint distribution. By exploiting the close link between the notion of trimmings of a probability measure and the partial mass transportation problem, we construct a data-driven Distributionally Robust Optimization (DRO) framework to hedge the decision against the intrinsic error in the pr…
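For orientation, a minimal sketch of the setting in display form. The symbols x, ξ, z, f, the feasible set X, and the radius ρ are generic placeholders; the precise construction of the ambiguity set in the paper relies on trimmings and partial mass transportation and is not reproduced here. The decision-maker would ideally solve the stochastic program conditional on the observed covariate value, and the data-driven DRO surrogate replaces the unknown conditional distribution with a worst case over a neighborhood of the sample.

```latex
% Ideal (unavailable) conditional stochastic program, and its data-driven
% DRO surrogate; \widehat{\mathcal{U}}_n(\rho) stands for an ambiguity set
% built from the n-point sample of the joint distribution of (z, \xi).
\[
  \min_{x \in X} \ \mathbb{E}_{P}\!\left[ f(x,\xi) \;\middle|\; z = z_0 \right]
  \qquad \leadsto \qquad
  \min_{x \in X} \ \sup_{Q \in \widehat{\mathcal{U}}_n(\rho)} \mathbb{E}_{Q}\!\left[ f(x,\xi) \right].
\]
```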

Cited by 18 publications (4 citation statements). References 23 publications.
“…On the other hand, decision-making models usually ignore the side information in the optimization framework and instead fit an offline predictive model between the independent and dependent variables, whose outputs are then used indirectly in the optimization. With the abundance of historical data and access to side information at decision time, we envision that the modeling framework of conditional stochastic optimization (Ban and Rudin [11], Bertsimas and Kallus [41], Kannan et al [217]) and its distributionally robust versions (Bertsimas et al [54], Bertsimas and Van Parys [44], Esteban-Pérez and Morales [139], Kannan et al [218], Nguyen et al [283]) will receive increasing attention in theory and practice. Model-free distributional robustness.…”
Section: Conclusion and Future Research Directions (mentioning)
confidence: 99%
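As an illustration of the conditional stochastic optimization stream cited above, here is a minimal sketch of a weighted sample-average approximation with k-nearest-neighbour covariate weights, in the spirit of Bertsimas and Kallus; the newsvendor cost, the weighting rule, and all parameter values are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def knn_weights(Z, z0, k=10):
    """k-nearest-neighbour weights on the historical covariates Z around the
    observed covariate z0: the k closest samples share weight 1/k each."""
    dists = np.linalg.norm(Z - z0, axis=1)
    idx = np.argsort(dists)[:k]
    w = np.zeros(len(Z))
    w[idx] = 1.0 / k
    return w

def conditional_saa_newsvendor(Y, w, price=5.0, cost=3.0):
    """Weighted sample-average approximation of a toy newsvendor problem:
    pick the order quantity minimizing the weighted empirical cost."""
    candidates = np.unique(Y)  # the weighted-quantile optimum sits at a sample point
    costs = [np.sum(w * (cost * q - price * np.minimum(q, Y))) for q in candidates]
    return candidates[int(np.argmin(costs))]

# Toy data: demand Y depends on a 2-dimensional covariate Z.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
Y = 10 + 3 * Z[:, 0] + rng.normal(scale=1.0, size=200)

w = knn_weights(Z, z0=np.array([1.0, 0.0]), k=20)
print(conditional_saa_newsvendor(Y, w))
```

The same pattern carries over to other estimators (kernel or random-forest weights); only the function that produces the weight vector `w` changes, while the downstream weighted optimization stays the same.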
“…Therefore, our formulation using the kernel functions allows us to reduce the DRO problem with covariates to the standard one studied in [37], and most statistical properties still hold.…”
Section: Convergence of the Objective Function (mentioning)
confidence: 99%
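To make that reduction concrete, the following sketch uses Gaussian (Nadaraya–Watson) kernel weights to turn the covariate-dependent problem into a standard reweighted one; the kernel choice, the bandwidth, and the crude worst-case step are placeholders and do not reproduce the ambiguity sets of [37] or of the paper under discussion.

```python
import numpy as np

def nadaraya_watson_weights(Z, z0, bandwidth=0.5):
    """Gaussian-kernel weights w_i proportional to exp(-||z_i - z0||^2 / (2 h^2)).
    Normalized, they define a nominal conditional distribution on the sample."""
    sq_dists = np.sum((Z - z0) ** 2, axis=1)
    w = np.exp(-0.5 * sq_dists / bandwidth ** 2)
    return w / w.sum()

def worst_case_expectation(losses, w, radius=0.1):
    """Toy 'standard' DRO step on the reweighted sample: shift a fraction
    `radius` of probability mass toward the worst-loss sample.  This is a
    placeholder for whatever ambiguity set the chosen DRO model prescribes."""
    losses = np.asarray(losses, dtype=float)
    w_shifted = (1.0 - radius) * np.asarray(w, dtype=float)
    w_shifted[int(np.argmax(losses))] += radius
    return float(np.dot(w_shifted, losses))
```

With these two pieces, evaluating a candidate decision amounts to computing its per-sample losses and calling `worst_case_expectation(losses, nadaraya_watson_weights(Z, z0))`; minimizing that value over the decision variable gives the covariate-aware DRO choice.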
“…This robustification relies on using a kernel density estimate to construct the nominal conditional distribution, and the weights of the samples are induced by U_{ϕ,ρ}(Ω). Hence, our scheme is applicable to the emerging stream of robustifying conditional decisions, see [13,20,27,28]. Remark 2.4 (Choice of the nominal matrix).…”
Section: A Reweighting Framework with Doubly Non-negative Matrices (mentioning)
confidence: 99%
“…where the inequality in (13) follows from the fact that U_{W,ρ}(Ω) ⊆ V_{W,ρ}(Ω), and the equality follows from Proposition 4.4. In the second step, we argue that the optimizer Ω of problem (13) can be constructed from the optimizer γ of the infimum problem via…”
Section: A Reweighting Framework with Doubly Non-negative Matrices (mentioning)
confidence: 99%