2021
DOI: 10.1016/j.ifacol.2021.08.427

The risk of making decisions from data through the lens of the scenario approach

Cited by 19 publications (9 citation statements) | References 23 publications

“…Hence, referring to Figure 10, in non-degenerate problems the distribution of (s*, V(x*)) is confined to the blue slanted region (except for a small portion whose probability is no more than β), whereas in degenerate problems the distribution of (s*, V(x*)) can expand below the lower boundary of the slanted region, while the upper boundary, which sets a limit to V(x*), is always valid (this latter result is proven in the recent paper Garatti & Campi (2021)).…”
Section: A General Results
confidence: 80%
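
For context (Figure 10 of the citing paper is not reproduced here): s* denotes the number of support constraints of the scenario solution x*, and V(x*) its probability of violating a new, unseen constraint. The "upper boundary" mentioned above is presumably of the wait-and-judge form, in which a function ε(·) of the observed s* limits V(x*) with confidence at least 1 − β; a sketch of that form:

% Sketch of a wait-and-judge style guarantee; epsilon(.) is the function
% tabulated in the scenario-approach literature and is not reproduced here.
\[
  \mathbb{P}^{N}\{\, V(x^{*}) > \epsilon(s^{*}) \,\} \;\le\; \beta .
\]
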
“…In fact, the bound in (3) is proved to be tight for a special class of convex programs called fully supported programs, see [12], [13] for details. For non-convex optimization problems, similar results are derived in [16], [17]. While these probabilistic guarantees for both the convex and non-convex cases provide a fundamental understanding of the solution of the scenario program (2), they alone do not allow one to bound the objective value of the original robust optimization problem.…”
Section: Preliminaries
confidence: 65%
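
The equation number "(3)" belongs to the citing paper and is not reproduced here; the tight bound it refers to is presumably the classical Campi–Garatti scenario bound, which for a convex scenario program with d decision variables, N i.i.d. scenarios, and optimal solution x* states that

% Classical scenario bound for convex programs (Campi & Garatti, 2008);
% V(x*) is the violation probability of the sample-based solution x*.
\[
  \mathbb{P}^{N}\{\, V(x^{*}) > \epsilon \,\}
  \;\le\; \sum_{i=0}^{d-1} \binom{N}{i}\, \epsilon^{i} (1-\epsilon)^{N-i},
\]

with equality holding exactly for the class of fully supported problems.
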
“…By randomly sampling a finite set of constraints, a scenario program is formulated and a sample-based solution can be obtained. Chance-constrained theorems are then derived for this sample-based solution in terms of the measure of the violating subset of the uncertain constraints, for both the convex [8], [9], [12]–[15] and non-convex [16], [17] cases, by using the concepts of support/essential constraints. In particular, the bounds in [12], [13] have been proved to be tight for a special class of uncertain convex programs called fully supported problems.…”
Section: Introduction
confidence: 99%
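
As an illustration of this sampling-and-solving step, here is a minimal sketch with hypothetical problem data (cvxpy is assumed to be available; the active-constraint count below is only a crude numerical proxy for the support constraints, not the procedure of any of the cited papers):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, N = 2, 200                         # decision dimension, number of sampled scenarios

# Each sampled scenario delta_i induces one linear constraint a_i^T x <= b_i.
A = rng.normal(size=(N, d))
b = 1.0 + 0.1 * rng.normal(size=N)

x = cp.Variable(d)
scenario_program = cp.Problem(cp.Minimize(-cp.sum(x)),   # a generic linear objective
                              [A @ x <= b])               # all N sampled constraints at once
scenario_program.solve()

# Constraints numerically active at the sample-based solution x*; for a linear
# program in d variables one generically expects at most d of them.
slacks = b - A @ x.value
print("x* =", x.value, "| active sampled constraints:", int(np.sum(slacks < 1e-6)))

Scenario theory then converts the (unknown) violation probability of this sample-based x* into explicit probabilistic bounds of the kind quoted above.
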
“…Note that η(c) is monotonically decreasing in c [30], so for any d*_ρ ≥ c*_ρ, we have η(d*_ρ) ≤ η(c*_ρ). Hence, Eq.…”
Section: B1 Proof of Theorem
confidence: 99%