2016
DOI: 10.48550/arxiv.1603.06288
Preprint
Multi-fidelity Gaussian Process Bandit Optimisation

Cited by 5 publications (14 citation statements)
References 0 publications
“…The Maximum Information Gain. As in previous works on GP (Chowdhury and Gopalan, 2017; Kandasamy et al., 2016), our results will depend on the maximum information gain (Srinivas et al., 2009) between function measurements and the function values, defined as below: Definition 1. Suppose A ⊆ X is a subset of feature space, and Ã = {x_1, ..., x_n} ⊆ A is a finite subset of A.…”
Section: The Gaussian Process Back End
confidence: 99%
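For context, the maximum information gain referenced in this excerpt is standardly defined (following Srinivas et al., 2009) as the largest mutual information obtainable between n noisy observations and the underlying function values; for a GP with observation-noise variance σ² it admits a closed form. The notation below is a sketch matching the excerpt's Definition 1 setup, not a verbatim reproduction of the citing paper's statement:

```latex
\gamma_n \;:=\; \max_{\tilde{A} \subseteq A:\,|\tilde{A}| = n} I\big(\mathbf{y}_{\tilde{A}};\, \mathbf{f}_{\tilde{A}}\big),
\qquad
I\big(\mathbf{y}_{\tilde{A}};\, \mathbf{f}_{\tilde{A}}\big)
\;=\; \tfrac{1}{2}\,\log\det\!\big(\mathbf{I}_n + \sigma^{-2}\,\mathbf{K}_{\tilde{A}}\big),
```

where K_Ã is the kernel (Gram) matrix evaluated on the points of Ã. The quantity γ_n governs the regret bounds of GP-UCB-style algorithms discussed in the excerpts below.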
“…We build our work on GP-UCB (Srinivas et al., 2009), a method for optimizing unknown functions under the Gaussian process (GP) assumption by optimizing the Upper Confidence Bound (UCB). Closest to our setting is a line of recent research on multi-fidelity GP optimization (Kandasamy et al., 2016, 2017; Sen et al., 2018), which assumes that we can query the target functions at multiple fidelities of different costs and precisions. We detail the relation and difference of our setting with multi-fidelity optimization in Section 3.6.…”
Section: Introduction
confidence: 99%
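The GP-UCB rule described in this excerpt can be sketched in a few lines: fit a GP posterior to the observations, then query the point maximizing mean plus a scaled posterior standard deviation. The sketch below assumes an RBF kernel and a fixed candidate grid; all function names, the toy objective, and parameter values (lengthscale, beta) are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.1):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior(X_obs, y_obs, X_query, noise=1e-3):
    """Standard GP posterior mean and variance at the query points."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_query, X_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    # Prior variance is 1 for the RBF kernel; subtract the explained part.
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, X_grid, n_rounds=30, beta=2.0, seed=0):
    """Each round, query the grid point maximizing mu + sqrt(beta) * sigma."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X_grid)))]
    y = [f(X_grid[idx[0]])]
    for _ in range(n_rounds - 1):
        mu, var = gp_posterior(X_grid[idx], np.array(y), X_grid)
        idx.append(int(np.argmax(mu + np.sqrt(beta * var))))
        y.append(f(X_grid[idx[-1]]))
    best = int(np.argmax(y))
    return X_grid[idx[best]], y[best]

# Maximize a toy single-peak objective on [0, 1].
f = lambda x: float(np.exp(-(x[0] - 0.7) ** 2 / 0.01))
X = np.linspace(0, 1, 200).reshape(-1, 1)
x_best, y_best = gp_ucb(f, X)
```

The multi-fidelity variants discussed in the excerpt extend this rule by choosing, at each round, both a query point and a fidelity level, trading off query cost against precision.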
“…Recently, the multi-fidelity setting has been theoretically studied in online problems [40, 1, 33, 18]. In some recent works [19, 16, 17], UCB-like algorithms with Bayesian Gaussian process assumptions on f have been analyzed in a multi-fidelity black-box optimization setting. Also relevant to this work are the bandit based techniques for hyper-parameter optimization such as [28, 14].…”
Section: Related Work
confidence: 99%
“…We use multi-fidelity versions of commonly used benchmark functions in the black-box optimization literature. These multi-fidelity versions have been previously used in [17, 34].…”
Section: A More On MFPOO (Algorithm 2)
confidence: 99%