2022
DOI: 10.48550/arxiv.2204.13753
Preprint

High Dimensional Bayesian Optimization with Kernel Principal Component Analysis

Abstract: Bayesian Optimization (BO) is a surrogate-based global optimization strategy that relies on a Gaussian Process regression (GPR) model to approximate the objective function and an acquisition function to suggest candidate points. It is well-known that BO does not scale well for high-dimensional problems because the GPR model requires substantially more data points to achieve sufficient accuracy and acquisition optimization becomes computationally expensive in high dimensions. Several recent works aim at address…
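The abstract's description of the BO loop (a GPR surrogate plus an acquisition function that proposes candidate points) can be made concrete with a minimal sketch. This is not the paper's implementation; the Matern kernel, expected-improvement acquisition, and random candidate search are assumed choices for illustration only.

```python
# Minimal BO sketch, assuming an expected-improvement acquisition and a
# Matern-kernel GPR surrogate (illustrative choices, not the paper's method).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(X_cand, gpr, y_best):
    """Expected improvement for minimization at candidate points X_cand."""
    mu, sigma = gpr.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def bo_minimize(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Toy BO loop: fit a GPR surrogate, then maximize the acquisition."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gpr.fit(X, y)
        # Acquisition optimization by random candidate search: a simple stand-in;
        # this is exactly the step the abstract flags as expensive in high dimensions.
        cand = rng.uniform(lo, hi, size=(2048, dim))
        x_next = cand[np.argmax(expected_improvement(cand, gpr, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()
```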

Cited by 1 publication (2 citation statements)
References 5 publications
“…Additionally, there is a notable computational expense during hyperparameter tuning of the surrogate in the high-dimensional case. To mitigate this challenge, employing methods such as REMBO (Wang et al 2016) and ALEBO (Letham et al 2020) or (k)PCA-BO (Raponi et al 2020; Antonov et al 2022) presents an avenue for further reducing the computational cost. These methods operate under the assumption that certain dimensions are more significant than others, consequently reducing the number of tunable hyperparameters.…”
Section: Discussion (mentioning)
confidence: 99%
“…In Wang et al (2016), the authors use random projection methods to project the high-dimensional inputs to a lower dimensional subspace, ending up by constructing the GP model directly on the lower dimensional space, drastically reducing the number of hyperparameters. Raponi et al (2020) and Antonov et al (2022) use (kernel) Principal Component Analysis on the input space to identify a reduced set of dimensions based on the evaluated samples. Afterwards, the surrogate model is trained in this reduced dimensional space.…”
Section: High-dimensional Problems (mentioning)
confidence: 99%
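As a hedged sketch of the dimensionality-reduction idea described in the statement above (not the exact PCA-BO or KPCA-BO algorithms of Raponi et al. 2020 and Antonov et al. 2022, which involve further refinements such as weighting the samples by objective value), one can fit a linear PCA on the evaluated points and train the GP surrogate in the reduced space. The component count and kernel below are assumed choices.

```python
# Sketch only: PCA on the evaluated samples, GP surrogate in the reduced space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def fit_reduced_surrogate(X, y, n_components=2):
    """Project the evaluated samples to a low-dimensional subspace and fit a GP there."""
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)          # low-dimensional representation of the samples
    gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gpr.fit(Z, y)                     # far fewer kernel hyperparameters to tune
    return pca, gpr

# A candidate z proposed in the reduced space is mapped back to the original
# search space with pca.inverse_transform(z.reshape(1, -1)) before the true
# objective is evaluated.
```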