1998
DOI: 10.1287/opre.46.5.710
Stopping Rules for a Class of Sampling-Based Stochastic Programming Algorithms


Cited by 16 publications (16 citation statements); citing publications span 1999–2019. References 37 publications.
“…We note, however, that MCMC-IS can be paired with many other stochastic optimization algorithms, such as the sample average approximation method, stochastic decomposition (Higle and Sen (1991)), progressive hedging (Rockafellar and Wets (1991)), augmented Lagrangian methods (Parpas and Rustem (2007)), variants of Benders' decomposition (Birge and Louveaux (2011)), or even approximate dynamic programming (Powell (2007)). More generally, we also expect MCMC-IS to yield similar benefits in sampling-based approaches for developing stopping rules (Bayraksan and Pierre-Louis (2012), Morton (1998)), chance-constrained programming (Barrera et al (2014), Watson et al (2010)), and risk-averse stochastic programming (Kozmík and Morton (2013), Shapiro (2009)). …”
Section: Introduction (mentioning)
confidence: 93%
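The statement above names the sample average approximation (SAA) method among the algorithms that MCMC-IS could be paired with. As a rough illustration only (not code from the cited papers), the sketch below applies SAA to a toy newsvendor problem: the expectation in the stochastic objective is replaced by an average over sampled demand scenarios, and the resulting deterministic surrogate is minimized. The demand distribution, prices, and bounds are invented for the example.

```python
# Minimal SAA sketch on a toy newsvendor problem (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def newsvendor_cost(x, demand, price=5.0, cost=3.0):
    """Cost of ordering x units against realized demand (vectorized over scenarios)."""
    sales = np.minimum(x, demand)
    return cost * x - price * sales

def saa_solve(n_scenarios):
    """Minimize the sample average (1/N) * sum_i cost(x, d_i) over N sampled demands."""
    demands = rng.exponential(scale=100.0, size=n_scenarios)
    objective = lambda x: newsvendor_cost(x, demands).mean()
    res = minimize_scalar(objective, bounds=(0.0, 500.0), method="bounded")
    return res.x, res.fun

x_star, obj = saa_solve(n_scenarios=1000)
print(f"SAA solution: order {x_star:.1f} units, sampled objective {obj:.2f}")
```

Because the solution is computed from a finite sample, it carries estimation error; that is exactly the issue the stopping rules discussed in this paper address.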
“…Therefore, the question that arises is how the estimation error that results from using a finite sample size can be taken into account when devising criteria for stopping the algorithm. Some early discussion about stopping criteria when using sampling methods can be found in [24] and [37]. In Sect.…”
Section: A Discussion On the Stopping Criterion (mentioning)
confidence: 99%
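The statement above raises the question of how finite-sample estimation error should enter a stopping test. A minimal sketch, in the spirit of confidence-interval stopping rules for sampling-based algorithms (the cited papers give the precise constructions, which differ in detail): stop only when the estimated optimality gap plus its sampling-error term falls below a tolerance. The gap-sample construction and the parameter values here are assumptions for illustration.

```python
# Sketch of a one-sided confidence-interval stopping test on an optimality-gap estimate.
import numpy as np
from scipy.stats import norm

def should_stop(gap_samples, epsilon=0.05, alpha=0.05):
    """Stop when  mean(gap) + z_{1-alpha} * std(gap)/sqrt(n)  <=  epsilon.

    gap_samples : per-scenario gap estimates for a candidate solution x_hat,
                  e.g. f(x_hat, xi_i) - f(x_i*, xi_i)  (illustrative choice).
    """
    n = len(gap_samples)
    gap_bar = np.mean(gap_samples)
    half_width = norm.ppf(1 - alpha) * np.std(gap_samples, ddof=1) / np.sqrt(n)
    return gap_bar + half_width <= epsilon

# Example: small estimated gap, but too few / too noisy samples -> keep iterating.
print(should_stop(np.array([0.02, 0.10, 0.01, 0.15, 0.03])))
```

The point of adding the half-width term is that a small average gap alone is not evidence of near-optimality when the sample is small or noisy.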
“…Thus a stopping criterion should be introduced. Some stopping criteria are introduced in [43,44]. Here a different stopping criterion is suggested: it lets the algorithm stop as soon as the stochasticity becomes larger than the differences in the underlying landscape.…”
Section: Particle Swarm (mentioning)
confidence: 99%
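One reading of the criterion described above, sketched under stated assumptions rather than taken from the cited paper: terminate the particle swarm once the sampling noise in the objective estimates exceeds the spread of objective values across the swarm, since further apparent improvement would be indistinguishable from noise. The helper below and its noise estimate are hypothetical.

```python
# Sketch: stop when objective-estimate noise dominates the swarm's landscape spread.
import numpy as np

def noise_dominates(particle_values, noise_std):
    """particle_values: current objective estimates, one per particle (NumPy array).
    noise_std: estimated standard deviation of the sampling noise, e.g. from
    re-evaluating the best particle several times (assumed available)."""
    landscape_spread = particle_values.max() - particle_values.min()
    return noise_std > landscape_spread

values = np.array([3.2, 3.5, 3.4, 3.3])
print(noise_dominates(values, noise_std=0.5))  # True: noise exceeds the ~0.3 spread
```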