2021
DOI: 10.7763/ijmo.2021.v11.771
Improved Gaussian Process Acquisition for Targeted Bayesian Optimization

Abstract: A black-box optimization problem is considered, in which the function to be optimized can only be expressed in terms of a complicated stochastic algorithm that takes a long time to evaluate. The value returned is required to be sufficiently near a target value, and the data used carry a significant noise component. Bayesian Optimization with an underlying Gaussian Process is used as the optimization method, and its effectiveness is measured by the number of function evaluations required to attain t…
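The targeted setting differs from standard Bayesian Optimization in that the goal is to drive the returned value to within a tolerance of a target rather than to a global optimum. The sketch below shows one way such a loop can be set up: a GP surrogate is fit to noisy evaluations, and the next point maximizes the posterior probability that the latent function lies within a tolerance of the target. The toy objective, the target T, the tolerance eps, and this acquisition rule are all illustrative assumptions; the paper's own acquisition function is not reproduced here.

```python
# Minimal sketch of targeted Bayesian optimization (assumptions noted above).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)

def noisy_black_box(x):
    # Stand-in for the slow stochastic algorithm (hypothetical).
    return np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)

T, eps = 0.5, 0.05                       # target value and acceptance tolerance
grid = np.linspace(0.0, 2.0, 400).reshape(-1, 1)

X = rng.uniform(0.0, 2.0, size=(5, 1))   # initial design
y = noisy_black_box(X).ravel()

kernel = Matern(length_scale=0.5, nu=2.5) + WhiteKernel(noise_level=0.01)
for it in range(20):
    gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=3).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # Acquisition: posterior probability that f(x) lies within eps of the target T.
    p_hit = norm.cdf((T + eps - mu) / sd) - norm.cdf((T - eps - mu) / sd)
    x_next = grid[np.argmax(p_hit)].reshape(1, 1)
    y_next = noisy_black_box(x_next).ravel()
    X, y = np.vstack([X, x_next]), np.concatenate([y, y_next])
    if abs(y_next[0] - T) <= eps:        # stop once a returned value is near enough
        print(f"target attained after {it + 1} acquisitions, x = {x_next.item():.3f}")
        break
```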

Cited by 3 publications (8 citation statements)
References 11 publications
“…We have published extensive results for zero acquisition, including comparisons with "established" acquisition functions. They may be found in [35,36]. Those results are too extensive to reproduce in full in this paper, but we do report a summary of them below.…”
Section: Previous Results
confidence: 75%
“…The main findings from [35,36] were (the list below shows the mean run numbers with standard deviations in square brackets): … A qualitative summary of the previous results is that Block 1 ("traditional") methods have the same performance characteristics as random selection, and that zero acquisition approximately halves both the means and the standard deviations of the "traditional" run numbers. In the sections that follow, our current results are compared to our previous results.…”
Section: Previous Results
confidence: 93%
“…On the other hand, the availability of gradients in the MLE case (without significant additional computational cost) is an advantage in the implementation of the numerical optimization algorithms required for hyperparameter estimation. However, this advantage must be qualified by recent work on the computational complexity of cross-validation schemes, and more precisely on the fast computation of gradients of LOO criteria [58,43,50]. Still, in Petit [50], an intensive benchmark on analytic functions of different dimensions shows that MLE is often preferable to its competitors (not only in well-specified cases but also when regularity is overestimated), and that the choice of regularity (ν in the Matérn class) may often matter more than the estimation of the GP hyperparameters.…”
Section: Discussion On the Relative Practical Performance Of The Diff...
confidence: 99%
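To make the point about the regularity parameter concrete, the sketch below fits a GP by maximum likelihood for several Matérn smoothness values ν and compares the resulting log marginal likelihoods; ν itself is a modelling choice fixed before (σ², θ) are estimated. The data-generating function and kernel settings are our own illustrative assumptions, not those of the benchmark in [50].

```python
# Compare MLE fits across Matern smoothness values nu (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(4 * X).ravel() + 0.05 * rng.standard_normal(30)

for nu in (0.5, 1.5, 2.5, np.inf):   # np.inf gives the squared-exponential limit
    kernel = ConstantKernel(1.0) * Matern(length_scale=0.3, nu=nu)
    gp = GaussianProcessRegressor(kernel=kernel, alpha=0.05**2,
                                  n_restarts_optimizer=5).fit(X, y)
    # (sigma^2, theta) are estimated by MLE for this fixed nu.
    print(f"nu={nu}: log marginal likelihood = {gp.log_marginal_likelihood_value_:.2f}")
```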
“…Hence, as suggested by Petit [50] and in the direct line of Demay et al. [44], an interesting compromise (one that we also recommend from our experience) is to consider a finite collection of covariance functions (those of Table 1), estimate the hyperparameters (σ², θ) for each of them, and finally use a validation criterion (different from the criterion used for the estimation) to select the best covariance.…”
Section: Usual Covariance Functions and Consideration On A Priori Choice
confidence: 99%
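A minimal sketch of that compromise follows, under our own assumptions about the candidate list and the validation metric: each candidate covariance has its hyperparameters estimated by maximum likelihood, and the winner is selected by a separate leave-one-out predictive criterion (here, LOO mean squared error, with the MLE re-run inside each fold).

```python
# Select among candidate covariances by a LOO criterion distinct from the
# MLE criterion used to fit hyperparameters (candidates/metric are assumptions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF, RationalQuadratic
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(25, 1))
y = np.abs(X).ravel() + 0.05 * rng.standard_normal(25)   # non-smooth truth

candidates = {
    "Matern(nu=0.5)": Matern(nu=0.5),
    "Matern(nu=2.5)": Matern(nu=2.5),
    "RBF": RBF(),
    "RationalQuadratic": RationalQuadratic(),
}
scores = {}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, alpha=0.05**2,
                                  n_restarts_optimizer=2)
    # Hyperparameters are re-estimated by MLE within each LOO fit; the
    # selection criterion is the LOO predictive error, not the likelihood.
    scores[name] = cross_val_score(gp, X, y, cv=LeaveOneOut(),
                                   scoring="neg_mean_squared_error").mean()

best = max(scores, key=scores.get)
print({k: round(v, 4) for k, v in scores.items()})
print("selected covariance:", best)
```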