2021
DOI: 10.48550/arxiv.2105.05806
Preprint

High-Dimensional Experimental Design and Kernel Bandits

Abstract: In recent years, methods from optimal linear experimental design have been leveraged to obtain state-of-the-art results for linear bandits. A design returned from an objective such as G-optimal design is actually a probability distribution over a pool of potential measurement vectors. Consequently, one nuisance of the approach is the task of converting this continuous probability distribution into a discrete assignment of N measurements. While sophisticated rounding techniques have been proposed, in d dimension…
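To make the abstract's setup concrete, here is a minimal sketch (not the paper's method) of the standard pipeline it refers to: computing an approximate G-optimal design with Frank-Wolfe and then naively rounding the continuous design into N integer measurement counts. Function names and the floor-then-top-up rounding are illustrative assumptions; as the abstract notes, real rounding schemes are considerably more careful.

```python
import numpy as np

def g_optimal_design(X, iters=500):
    """Frank-Wolfe on the G-optimal design objective
    min_lam max_i x_i^T A(lam)^{-1} x_i, where
    A(lam) = sum_i lam_i x_i x_i^T. By the Kiefer-Wolfowitz
    theorem the optimal value equals d."""
    n, d = X.shape
    lam = np.full(n, 1.0 / n)              # start from the uniform design
    for t in range(iters):
        A = X.T @ (lam[:, None] * X)       # d x d information matrix
        Ainv = np.linalg.pinv(A)
        g = np.einsum("ij,jk,ik->i", X, Ainv, X)  # x_i^T A^{-1} x_i
        j = np.argmax(g)                   # most under-covered direction
        gamma = 1.0 / (t + 2)              # standard Frank-Wolfe step size
        lam = (1 - gamma) * lam
        lam[j] += gamma
    return lam

def round_design(lam, N):
    """Naive rounding of a continuous design into N integer counts:
    floor each N*lam_i, then give the leftover budget to the
    largest fractional residuals (illustrative only)."""
    counts = np.floor(N * lam).astype(int)
    rem = N - counts.sum()
    order = np.argsort(-(N * lam - counts))
    counts[order[:rem]] += 1
    return counts
```

With a random pool `X` of n = 20 vectors in d = 4 dimensions, `g_optimal_design` drives the worst-case variance proxy max_i x_i^T A^{-1} x_i toward d, and `round_design(lam, N)` returns nonnegative counts summing exactly to N. The gap between the rounded and continuous designs is precisely the nuisance the abstract describes.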

Cited by 3 publications (6 citation statements)
References 3 publications
“…Remark 4. Our sample complexity guarantees are directly related to the eigenvalue decay of the underlying kernel function, rather than the empirical kernel matrix as studied in previous works [7,42]. Although one can also provide an instance dependent bound as in Theorem 1, the worst-case sample complexity bound in Theorem 2 provides insightful characterizations of the sample complexity in terms of eigenvalue decay.…”
Section: Pure Exploration in RKHS (mentioning)
confidence: 96%
“…Learning with model misspecifications was recently introduced in bandit learning, with the primary emphasis placed on the regret minimization problem [16,28,14]. A very recent independent work studies pure exploration in kernel bandits with misspecifications [7]. They propose a robust estimator that works in high-dimensional spaces and also explore the idea of low-dimensional embeddings through regularized least squares.…”
Section: Related Work (mentioning)
confidence: 99%
“…We aim to provide sample complexity/error probability guarantees with respect to a hypothesis class that is rich enough to allow us to identify an ε-optimal arm. [3,22] recently study pure exploration with misspecifications in the fixed confidence setting; we additionally provide the first analysis for the fixed budget setting. The model selection criterion also complicates the analysis and is not covered in previous work.…”
Section: Model Selection with Misspecifications (mentioning)
confidence: 99%