2020
DOI: 10.1287/moor.2019.1043
On the Scenario-Tree Optimal-Value Error for Stochastic Programming Problems

Abstract: Stochastic programming problems generally lead to large-scale programs if the number of random outcomes is large or if the problem has many stages. A way to tackle them is provided by scenario-tree generation methods, which construct approximate problems from a reduced subset of outcomes. However, it is well known that the number of scenarios required to keep the approximation error within a given tolerance grows rapidly with the number of random parameters and stages. For this reason, to limit the fast growth…

Cited by 5 publications (4 citation statements)
References 59 publications (45 reference statements)
“…The objective function is indeed important to consider since, for instance, a risk-averse problem featuring a log-utility function might require a different scenario set than its risk-neutral version, as it might be sensitive to different aspects of the distribution. Although the vast majority of scenario-generation methods fall into the distribution-driven category, which has historically been the predominant one, approaches in the problem-driven category have recently been studied in [Henrion and Römisch, 2018, Fairbrother et al., 2019, Keutchayan et al., 2020] as a means to provide more efficient procedures for generating scenarios.…”
Section: Introduction
confidence: 99%
“…It is a process of investigating the impact of variations in the model's input parameters on the resulting output. The work [33] proposes a concrete theoretical foundation for assessing the optimal-value error, i.e., the difference between the optimal value of the original SP problem and that of the approximate problem generated from a given scenario tree. The study highlights that sub-optimal discretization at a node in the scenario-tree generation process contributes to the optimal-value error, and suggests that suitable scenario trees can be designed to numerically integrate specific classes of functions determined by the problem's structure.…”
confidence: 99%
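The optimal-value error discussed in the statement above can be made concrete with a toy two-stage example. The sketch below uses a hypothetical newsvendor-style problem (all numbers, names, and the model itself are illustrative assumptions, not taken from [33]): it compares the optimal value computed on a "full" outcome set with the one computed on a reduced scenario set, and their gap is the optimal-value error of that reduction.

```python
# Illustrative sketch (hypothetical data): the optimal-value error is the gap
# between the optimal value of a stochastic program solved over the full
# outcome set and the one solved over a reduced scenario set.

COST, PRICE = 1.0, 2.5  # made-up purchase cost and selling price

def expected_profit(x, scenarios):
    # scenarios: list of (probability, demand) pairs
    return sum(p * (PRICE * min(x, d) - COST * x) for p, d in scenarios)

def optimal_value(scenarios):
    # For this newsvendor-type problem, an optimal order quantity
    # lies at one of the scenario demands, so enumeration suffices.
    return max(expected_profit(d, scenarios) for _, d in scenarios)

full = [(0.2, 5), (0.3, 8), (0.3, 10), (0.2, 15)]  # "true" distribution
reduced = [(0.5, 8), (0.5, 10)]                    # reduced scenario set

v_full = optimal_value(full)
v_reduced = optimal_value(reduced)
print(abs(v_full - v_reduced))  # optimal-value error of this reduction
```

Dropping the tail scenarios (demands 5 and 15) makes the reduced problem look more favorable than the true one, which is exactly the kind of discretization effect the cited analysis bounds.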
“…context) of an instance also influences the landscape of the objective function. In contrast, approaches in the problem-oriented paradigm endeavor to incorporate problem-specific properties into scenario reduction [77][78][79][80]. For example, Keutchayan et al. [80] search for top-K scenarios whose individual objective values approximate the expected objective values over the K scenario subsets to which they respectively belong.…”
Section: Learning For Stochastic Integer Programs
confidence: 99%
“…On the other hand, the results delivered by CVAE-SIPA are generally superior or comparable to those of CVAE-SIP, suggesting better cross-distribution generalization. In our training, we do not assume explicit dependencies between the context and scenarios, following the settings in [21,24,79]. However, we would like to show that the trained networks generalize well to the dependent settings.…”
Section: Generalization Across Distributions and Dependencies
confidence: 99%