Continuous multi-task Bayesian Optimisation with correlation (2018)
DOI: 10.1016/j.ejor.2018.03.017

Abstract: The version presented here may differ from the published version or version of record. If you wish to cite this item, you are advised to consult the publisher's version. Please see the 'permanent WRAP url' above for details on accessing the published version, and note that access may require a subscription.

Cited by 34 publications (19 citation statements). References 20 publications.
“…In this section, we adapt the IKG algorithm to our setting, and compare it with GP-C-OCBA. IKG offers a strong benchmark for our method, since it is based on the same GP model and has demonstrated superior sampling efficiency in prior work [21,22]. Knowledge Gradient (KG) [10] is a value-of-information type policy that was originally proposed for the R&S problem and later expanded to global optimization of black-box functions.…”
Section: Integrated Knowledge Gradient (mentioning)
confidence: 99%
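For readers unfamiliar with the knowledge gradient, its standard ranking-and-selection form can be written as follows (notation here is ours, not the quoted paper's): the KG factor of alternative k after n samples is

\mathrm{KG}_n(k) \;=\; \mathbb{E}_n\!\left[\,\max_{k'} \mu_{n+1}(k') \;-\; \max_{k'} \mu_n(k') \;\middle|\; \text{the } (n{+}1)\text{-th sample is taken at } k \right],

where \mu_n denotes the posterior mean after n samples. The policy then samples the alternative with the largest KG factor, i.e. the one whose observation is expected to improve the predicted best value the most.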
“…In the classical R&S setting, where c and c′ are redundant (i.e., there is only a single context), the KG policy operates by evaluating the alternative k* = arg max_k KG(k, c; c). To extend this to the contextual Bayesian optimization problem, [8,21,22] each study an integrated (or summed) version of KG, under slightly different problem settings, where either the context space or both alternative-context spaces are continuous. The main differences between these three works are in how they approximate and optimize the integrated KG factor in their respective problem settings.…”
Section: Integrated Knowledge Gradient (mentioning)
confidence: 99%
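A hedged sketch of what such an integrated KG factor can look like (the weighting w and the finite context set \mathcal{C} are illustrative assumptions; the quoted works differ precisely in how this quantity is defined, approximated and optimised):

\mathrm{IKG}(k, c) \;=\; \sum_{c' \in \mathcal{C}} w(c')\, \mathrm{KG}(k, c'; c), \qquad (k^*, c^*) \;=\; \arg\max_{k,\,c} \mathrm{IKG}(k, c),

where \mathrm{KG}(k, c'; c) measures the expected improvement in the predicted best value at context c' obtained by sampling alternative k at context c, and w(\cdot) weights (or integrates over) the contexts.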
“…Rastrigin and Ackley functions) and no explicit information from the distance between the tasks is exploited. Multi-task Bayesian optimization focuses on solving multiple correlated tasks when the fitness function is expensive [32], for instance when tuning a machine learning algorithm to several datasets or tuning a policy for a robot that depends on the context [13], like a walking controller that depends on the slope. The general idea of Bayesian optimization [3] is to use the previous fitness evaluations to predict the location of the most promising candidate solution, evaluate it, update the predictor, and repeat.…”
Section: Multitask Optimization and Learning (mentioning)
confidence: 99%
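The loop sketched in this last statement (predict the most promising candidate, evaluate it, update the predictor, repeat) can be illustrated with a minimal single-task example. The GP surrogate, expected-improvement acquisition, toy objective, and budget below are assumptions chosen for illustration; this is not the correlated multi-task method of the paper itself.

# Minimal Bayesian optimisation loop: fit a GP to past evaluations,
# pick the point that maximises expected improvement, evaluate it, repeat.
# Generic single-task sketch for illustration only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                              # expensive black-box objective (toy stand-in)
    return np.sin(3 * x) + 0.1 * x**2

def expected_improvement(X_cand, gp, y_best):
    # EI for minimisation: expected amount by which a candidate beats the incumbent.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))    # small initial design
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                    # evaluation budget
    gp.fit(X, y)                       # update the predictor with all data so far
    X_cand = np.linspace(-2, 2, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)]     # most promising candidate under the surrogate
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))        # evaluate the expensive function

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())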