2010
DOI: 10.1016/j.jspi.2010.04.018

Convergence properties of the expected improvement algorithm with fixed mean and covariance functions

Abstract: This paper is accepted for publication in the Journal of Statistical Planning and Inference. The final publisher-generated version (corrected proof) is already available from Elsevier's website (follow the DOI link). The full-text version available on HAL is an author-generated post-print. This paper deals with the convergence of the expected improvement algorithm, a popular global optimization algorithm based on a Gaussian process model of the function to be optimized. The first re…


Citation types: 4 supporting, 117 mentioning, 0 contrasting
Years published: 2010–2023

Publication Types

Select...
6
2
1

Relationship

0
9

Authors

Journals

Cited by 157 publications (121 citation statements)
References 13 publications
“…In particular, a number of approximations have been introduced and it is of interest whether the proposed strategy of selecting points according to the MEI will find good solutions and whether it will converge to the global optimum. Recent work [12] gives positive convergence results when the prior function is a Gaussian Process (GP) with fixed mean and covariance [13]. These results hold under fairly general conditions which apply to the BO algorithm on which our work is based.…”
Section: Bayesian Optimization (supporting)
confidence: 52%
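The excerpt above describes selecting evaluation points by maximum expected improvement under a GP prior with a fixed mean and covariance. As a rough illustration only, here is a minimal sketch of one such iteration; the squared-exponential kernel, its hyperparameters, the toy objective, and all function names are our own assumptions, not taken from the paper or the citing work.

```python
# Minimal sketch of one iteration of the expected-improvement (EI)
# algorithm in the paper's setting: a zero-mean Gaussian process prior
# with a fixed (known) squared-exponential covariance. All names and
# parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def kernel(a, b, length_scale=0.2, variance=1.0):
    """Fixed squared-exponential covariance k(a, b) on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_new, jitter=1e-10):
    """GP posterior mean and standard deviation at the points x_new."""
    K = kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    k_star = kernel(x_obs, x_new)                 # shape (n_obs, n_new)
    alpha = np.linalg.solve(K, k_star)
    mu = alpha.T @ y_obs
    var = kernel(x_new, x_new).diagonal() - np.sum(k_star * alpha, axis=0)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for minimization; defined as 0 where sigma == 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0.0, ei, 0.0)

# One EI step on a toy 1-D problem: evaluate EI on a grid, pick its maximizer.
f = lambda x: np.sin(3.0 * x) + 0.5 * x           # function to be optimized
x_obs = np.array([0.1, 0.5, 0.9])                 # points evaluated so far
y_obs = f(x_obs)
x_grid = np.linspace(0.0, 1.0, 501)
mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
ei = expected_improvement(mu, sigma, y_obs.min())
x_next = x_grid[np.argmax(ei)]                    # next evaluation point
print(f"next evaluation point: {x_next:.3f}")
```

Note that the covariance is hard-coded rather than re-estimated at each iteration, which mirrors the "fixed mean and covariance" assumption under which the paper's convergence results are stated.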
“…Expected Improvement has also been shown to converge under additional mild assumptions [14], but its main advantage is a closed-form expression that doesn't require numerical integration:…”
Section: Theorem (mentioning)
confidence: 99%
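The closed-form expression is truncated in the excerpt above. For minimization it is the standard EI formula, written here in our own notation (not quoted from the citing paper): f_min is the best value observed so far, μ(x) and σ(x) are the GP posterior mean and standard deviation, and Φ and φ are the standard normal CDF and PDF.

```latex
\mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,
    \Phi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right)
  + \sigma(x)\,
    \varphi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right),
\qquad \sigma(x) > 0,
```

with EI(x) = max(f_min − μ(x), 0) when σ(x) = 0.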
“…The minimization of the Objective Function can be performed with many techniques, said to be global or local depending on whether they converge to global or local minima. For instance, the following types of methods can be highlighted: the local gradient-based methods (Tarantola, 2005), which include the quasi-Newton and Gauss-Newton families; the global evolutionary and genetic methods (Hansen, 2006); and the global response-surface-based methods (Schonlau, 1997; Villemonteix et al., 2009; Vazquez and Bect, 2010). In the remainder of this paper, we propose to use the known simulation results to approximate p_{Y(x)}(y) from several Gaussian processes, instead of considering a Dirac probability density function.…”
Section: Probabilistic Inverse Problem Framework (mentioning)
confidence: 99%
“…As described in Schonlau (1997), Villemonteix et al. (2009) and Vazquez and Bect (2010), Gaussian process properties, with characterization of mean and covariance, can also be used for local or global optimization. In this case, the required OF is approximated by a Gaussian process and an improvement function corresponding to the potential reduction of the OF is defined.…”
Section: Introduction (mentioning)
confidence: 99%
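The improvement function mentioned in this excerpt is, in the usual formulation (again in our notation, not the citing paper's), the potential reduction of the objective below the best value observed after n evaluations, and the expected improvement is its conditional expectation given those evaluations:

```latex
I_n(x) = \max\bigl(f_{\min,n} - Y(x),\; 0\bigr),
\qquad
\mathrm{EI}_n(x) = \mathbb{E}\bigl[I_n(x) \,\big|\, Y(x_1), \ldots, Y(x_n)\bigr],
\qquad
f_{\min,n} = \min_{1 \le i \le n} Y(x_i).
```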