2010
DOI: 10.1007/978-3-642-13800-3_8

Bandit-Based Estimation of Distribution Algorithms for Noisy Optimization: Rigorous Runtime Analysis

Abstract: We show complexity bounds for noisy optimization in frameworks where the noise is stronger than in previously published work [19]. We also propose a bandit-based algorithm (a variant of the bandits of [16]) that reaches the bound up to logarithmic factors. We emphasize the differences with empirically derived algorithms from the literature.
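To make the bandit idea concrete, here is a minimal Python sketch of the underlying principle: resample two candidate points until a confidence bound separates their noisy fitness estimates. The oracle `noisy_f`, the assumption of rewards in [0, 1], and the anytime Hoeffding-style bound are illustrative choices; this is not the paper's exact algorithm.

```python
import math

def noisy_compare(noisy_f, x, y, delta=0.05, max_samples=10_000):
    """Bandit-style comparison of two points under noise.

    Resamples both points until a Hoeffding confidence bound
    separates their empirical mean fitnesses (assumes values in [0, 1]).
    Returns the point with the smaller estimated fitness, or None
    if the budget runs out before the means separate.
    """
    sum_x = sum_y = 0.0
    for t in range(1, max_samples + 1):
        sum_x += noisy_f(x)
        sum_y += noisy_f(y)
        mean_x, mean_y = sum_x / t, sum_y / t
        # Anytime Hoeffding radius: with probability >= 1 - delta, the
        # true means lie within +/- radius of the empirical ones.
        radius = math.sqrt(math.log(4 * t * t / delta) / (2 * t))
        if abs(mean_x - mean_y) > 2 * radius:
            return x if mean_x < mean_y else y
    return None  # statistically indistinguishable within the budget
```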

Cited by 15 publications (33 citation statements) · References 22 publications
“…Essentially, the algorithms for which we get a proof have the same dynamics as in the noise-free case, they just use enough resamplings for cancelling the noise. This is consistent with the existing literature, in particular [18] which shows a log-log convergence for an Estimation of Distribution Algorithm with exponentially decreasing step-size and exponentially increasing number of resamplings. In the experimental part, we see that another solution is a polynomially increasing number of resamplings (independently of σ_n; the number of resamplings just smoothly increases with the number of iterations, in a non-adaptive manner), leading to a slower convergence when considering the progress rate per iteration, but the same log-log convergence when considering the progress rate per evaluation.…”
Section: Local Noisy Optimization (supporting, confidence: 92%)
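The dynamic this excerpt describes can be sketched in a few lines: shrink the step-size exponentially while growing the number of resamplings exponentially, so that averaging cancels the noise and the noise-free dynamics are recovered. `noisy_f` is an assumed noisy fitness oracle, and the constants are placeholders; this is an illustration of the mechanism, not the R-EDA of [18].

```python
import numpy as np

def resampled_search(noisy_f, x0, sigma0=1.0, shrink=0.9,
                     resample_growth=1.5, iterations=50, rng=None):
    """Sketch: exponentially decreasing step-size combined with an
    exponentially increasing number of resamplings per candidate."""
    rng = rng or np.random.default_rng(0)
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for n in range(iterations):
        k = int(np.ceil(resample_growth ** n))  # resamplings at iteration n
        candidate = x + sigma * rng.standard_normal(x.shape)
        # Average k noisy evaluations of each point to reduce the noise.
        fx = np.mean([noisy_f(x) for _ in range(k)])
        fc = np.mean([noisy_f(candidate) for _ in range(k)])
        if fc < fx:
            x = candidate
        sigma *= shrink  # exponentially decreasing step-size
    return x
```

The design choice is the trade-off the quote names: each iteration behaves like its noise-free counterpart, at the cost of exponentially many evaluations per iteration.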
“…11 is the convergence in log/log scale. We have shown this property for an exponentially increasing number of resamplings, which is indeed similar to R-EDA [18], which converges with a small number of iterations but with exponentially many resamplings per iteration. In the experimental section 3, we will check what happens in the polynomial case.…”
Section: Theorem 1 Consider the Fitness Function (supporting, confidence: 62%)
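The two resampling schedules contrasted in these excerpts can be written side by side. A hedged sketch: the constants rho, c, and p are chosen purely for illustration, not taken from the cited papers.

```python
import math

def resamplings_exponential(n, rho=1.5):
    """Exponential schedule: a given precision is reached in few
    iterations, but each iteration pays exponentially many evaluations."""
    return math.ceil(rho ** n)

def resamplings_polynomial(n, c=1.0, p=2.0):
    """Non-adaptive polynomial schedule, independent of the noise level:
    slower progress per iteration, but (per the quote above) the same
    log-log convergence per evaluation."""
    return math.ceil(c * (n + 1) ** p)
```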
“…There are several variants for choosing k, such as: taking a fixed k, incrementing k with the iteration counter, or adapting k during the optimization. The impact of resampling on the convergence rate has been empirically or theoretically investigated in the references [23,1,12,22,21]. We here focus on adapting resampling schemes from continuous codomains [2] to discrete ones, and we cover a broad class of optimizers stated in the next subsection.…”
Section: State Of the Art (mentioning, confidence: 99%)
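A minimal sketch of the three variants for choosing the number of resamplings k named in this excerpt. The adaptive rule shown is a hypothetical noise-to-signal heuristic, not one prescribed by the cited references.

```python
import math

def fixed_k(iteration, k=10):
    """Variant 1: a constant number of resamplings."""
    return k

def incrementing_k(iteration):
    """Variant 2: k grows with the iteration counter, non-adaptively."""
    return iteration + 1

def adaptive_k(observed_std, step_size, k_min=1):
    """Variant 3 (hypothetical rule): pick k so the standard error of
    the averaged fitness stays small relative to the current step-size,
    i.e. k on the order of (noise / signal)^2."""
    return max(k_min, math.ceil((observed_std / max(step_size, 1e-12)) ** 2))
```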