2010
DOI: 10.1002/nav.20422

Adaptive random search for continuous simulation optimization

Abstract: We present, analyze, and compare three random search methods for solving stochastic optimization problems with uncountable feasible regions. Our adaptive search with resampling (ASR) approach is a framework for designing provably convergent algorithms that are adaptive and may consequently involve local search. The deterministic and stochastic shrinking ball (DSB and SSB) approaches are also convergent, but they are based on pure random search with the only difference being the estimator of the optima…

Cited by 59 publications (23 citation statements)
References 33 publications

“…These algorithms are designed primarily for discrete-valued simulation optimization problems, but some of them can also work with continuous decision variables (Andradóttir and Prudius, 2010). The first class of algorithms can be roughly classified as random search algorithms (Andradóttir, 1996, 2006; Alrefaei and Andradóttir, 1999; Prudius and Andradóttir, 2012).…”
Section: Black-box Search Methods
confidence: 99%
“…The number of simulation replications is required to approach infinity as the number of iterations goes to infinity. More adaptive random search algorithms, such as R-BEESE (Andradóttir and Prudius, 2009) and adaptive search with resampling (Andradóttir and Prudius, 2010), introduce a local search component to improve the finite-time performance of random search. By imposing relatively mild conditions on the distribution used to sample new solutions, random search algorithms of this type can achieve global convergence with probability 1.…”
Section: Black-box Search Methods
confidence: 99%
“…They prove that their method is globally convergent in probability. Andradóttir and Prudius [16] present the Adaptive Search with Resampling (ASR) method and prove that it is globally convergent w.p.1. Their method includes both sampling and resampling steps (similar to the approach of Yakowitz and Lugosi [82]), but the search is adaptive: only promising sampled points are "accepted" for further consideration (so additional observations are not collected at points that are not promising), and the estimated optimal solution is the best point sampled so far.…”
Section: Continuous Simulation Optimization
confidence: 99%
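The mechanics described in the excerpt above (sample, accept only promising points, resample accepted points, report the best point sampled so far) can be illustrated with a toy loop. This is a hypothetical simplification for intuition only, not the authors' exact ASR algorithm; the function names and acceptance rule are assumptions.

```python
import random

def asr_sketch(noisy_f, sample_point, n_iter=200, resample_k=10, seed=0):
    """Toy adaptive-search-with-resampling loop (simplified illustration,
    not the published ASR algorithm): draw candidate points, resample only
    those that look promising, and track the best point sampled so far."""
    rng = random.Random(seed)
    best_x, best_est = None, float("inf")
    for _ in range(n_iter):
        x = sample_point(rng)            # sampling step
        est = noisy_f(x, rng)            # one noisy observation
        if est < best_est:               # only promising points are resampled
            reps = [noisy_f(x, rng) for _ in range(resample_k)]
            est = sum(reps) / len(reps)  # refined estimate via resampling
            if est < best_est:
                best_x, best_est = x, est
    return best_x, best_est

# Example: minimize E[(x - 1)^2 + noise] over the interval [-2, 3]
x_star, _ = asr_sketch(lambda x, rng: (x - 1.0) ** 2 + rng.gauss(0, 0.1),
                       lambda rng: rng.uniform(-2.0, 3.0))
```

The point of the sketch is the asymmetry the excerpt highlights: unpromising candidates cost one observation, while promising ones receive extra replications to sharpen their estimates.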
“…Baumert and Smith [19] discuss at what rate the distance should decrease in order for the method to converge in probability. Their work was continued by Andradóttir and Prudius [16], who provide further analysis of the (deterministic) shrinking ball method of Baumert and Smith [19], develop and analyze the stochastic shrinking ball method, and provide numerical results.…”
Section: Continuous Simulation Optimization
confidence: 99%
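The shrinking-ball idea referenced above can also be sketched: the optimum is estimated not from single noisy observations but from the average of all observations falling within a ball whose radius shrinks as sampling proceeds. The sketch below is a deliberately simplified, one-dimensional illustration under assumed choices (uniform sampling, a single final radius, rate exponent -1/2); it does not reproduce the analyzed DSB/SSB algorithms or their convergence conditions.

```python
import random

def shrinking_ball_sketch(noisy_f, lo, hi, n_iter=300, seed=1):
    """Toy shrinking-ball estimator (simplified illustration): pure random
    search where each sampled point's value is estimated by averaging all
    observations inside a ball whose radius shrinks with the sample size."""
    rng = random.Random(seed)
    pts, obs = [], []
    for _ in range(n_iter):
        x = rng.uniform(lo, hi)          # pure random search over [lo, hi]
        pts.append(x)
        obs.append(noisy_f(x, rng))
    r = (hi - lo) * n_iter ** -0.5       # radius shrinking at an assumed rate
    best_x, best_avg = None, float("inf")
    for x in pts:
        vals = [y for p, y in zip(pts, obs) if abs(p - x) <= r]
        avg = sum(vals) / len(vals)      # ball average smooths the noise
        if avg < best_avg:
            best_x, best_avg = x, avg
    return best_x
```

Averaging over neighbors trades a small smoothing bias (of order r squared) for a large reduction in noise variance, which is why the rate at which r shrinks, the question studied by Baumert and Smith, matters for convergence.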
“…Many globally convergent random search algorithms have been proposed to solve optimization-via-simulation (OvS) problems, e.g., the stochastic ruler of Yan and Mukai (1992), the nested partitions method of Shi and Ólafsson (2000), the model reference method of Hu et al. (2007, 2008), and the shrinking ball method of Andradóttir and Prudius (2010). In every iteration of these algorithms, a sampling distribution is constructed based on all the information collected through the previous iterations, and it is used to guide the search effort in the current iteration.…”
Section: Introduction
confidence: 99%
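The "construct a sampling distribution each iteration, then use it to guide the search" pattern in the excerpt above can be made concrete with a generic sketch. This is a cross-entropy-style illustration of the general idea only; it is not any of the cited algorithms, and the distribution family (a normal refit to the elite samples) is an assumption.

```python
import random
import statistics

def guided_search_sketch(noisy_f, mu0, sigma0, n_iter=40, pop=50, elite=10, seed=2):
    """Generic guided random search (illustration of the idea only): each
    iteration refits a normal sampling distribution to the most promising
    samples seen so far and draws the next candidates from it."""
    rng = random.Random(seed)
    mu, sigma = mu0, sigma0
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(pop)]   # sample from current distribution
        xs.sort(key=lambda x: noisy_f(x, rng))            # rank by noisy objective values
        best = xs[:elite]                                 # keep the most promising samples
        mu = statistics.fmean(best)                       # refit the sampling distribution
        sigma = max(statistics.pstdev(best), 1e-3)        # floor keeps exploration alive
    return mu
```

Concentrating the distribution around good samples is what distinguishes these methods from pure random search: later iterations spend most of their simulation budget in regions the collected information already marks as promising.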