2020
DOI: 10.48550/arxiv.2001.02957
Preprint

Expected Improvement versus Predicted Value in Surrogate-Based Optimization

Abstract: Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. Few results from the literature show evidence that, under certain conditions, expe…
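To make the contrast in the abstract concrete, here is a minimal sketch (not the paper's code) of the two infill criteria it compares, assuming a minimisation problem and a surrogate that exposes a posterior mean and standard deviation at each candidate point; all function names are illustrative.

```python
# Minimal sketch: expected improvement vs. plain predicted value as
# infill criteria for surrogate-based minimisation. mu and sigma are
# the surrogate's posterior mean and standard deviation at candidates.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI(x) = E[max(f_best - Y, 0)] for Y ~ N(mu(x), sigma(x)^2)."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predicted variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def predicted_value(mu, sigma, f_best):
    """Purely exploitative criterion: rank candidates by the mean alone."""
    return -mu                         # negated so larger scores are better

def choose_next(candidates, mu, sigma, f_best, criterion=expected_improvement):
    """Return the candidate maximising the chosen infill criterion."""
    return candidates[np.argmax(criterion(mu, sigma, f_best))]
```

In this sketch, passing criterion=predicted_value instead of criterion=expected_improvement is the only difference between the two strategies the abstract contrasts; the surrogate fit and the outer optimisation loop are unchanged.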

Cited by 2 publications (7 citation statements). References: 14 publications.
“…ϵ = 0) is competitive with the best-performing methods. This result was recently confirmed by Rehbach et al. [33], who empirically show that solely using the surrogate model's predicted value performs better than EI on most problems with a dimensionality of 5 or more.…”
Section: Acquisition (supporting)
confidence: 53%
“…Motivated by the success of the sequential exploitative and ϵ-greedy approaches [10,33], we invert the local penalisation strategy and, instead, present a method that samples from within the region that would usually be penalised (6). We empirically show that this approach outperforms recent BBO methods on a range of synthetic functions and two real-world problems.…”
Section: Batch Bayesian Optimisation (mentioning)
confidence: 99%
“…It typically uses a Gaussian process as a surrogate to approximate the expensive objective. Several acquisition functions exist to guide the search, such as Expected Improvement, Upper Confidence Bound, or Thompson sampling [22], information-theoretic approaches such as Predictive Entropy Search [11], or simply the surrogate itself [5,19]. Though Gaussian processes are typically used on continuous problems, they can be adapted for problems with discrete variables as well.…”
Section: Related Work (mentioning)
confidence: 99%
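As a complement to the sketch above, and as a hedged illustration only (not code from any of the cited works), the confidence-bound criterion mentioned in this statement is a one-liner; beta is an illustrative exploration weight, not a value taken from the cited papers.

```python
def lower_confidence_bound(mu, sigma, beta=2.0):
    # LCB, the minimisation analogue of UCB: trade off the surrogate's
    # posterior mean against its uncertainty. beta is illustrative only.
    return mu - beta * sigma
```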
“…On top of that, it has been shown that a large dimensionality reduces the importance of choosing a complicated acquisition function [19], which helps us perform a fair comparison between surrogates.…”
Section: Benchmark Problems (mentioning)
confidence: 99%