2020
DOI: 10.48550/arxiv.2007.00925
Preprint

High Dimensional Bayesian Optimization Assisted by Principal Component Analysis

Abstract: Bayesian Optimization (BO) is a surrogate-assisted global optimization technique that has been successfully applied in various fields, e.g., automated machine learning and design optimization. Built upon a so-called infill-criterion and Gaussian Process regression (GPR), the BO technique suffers from a substantial computational complexity and a hampered convergence rate as the dimension of the search space increases. Scaling up BO for high-dimensional optimization problems remains a challenging task. In this pa…
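As a rough illustration of the two components the abstract names, a GPR surrogate and an infill criterion, the sketch below runs a minimal BO loop with Expected Improvement on a toy problem. Everything in it (the sphere objective, the Matern kernel with per-dimension length scales, random-search acquisition optimization) is an assumption made for illustration, not the paper's method.

```python
# Minimal Bayesian Optimization sketch: GPR surrogate + Expected Improvement.
# Illustrative only; the objective, bounds, kernel, and budgets are assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim, lb, ub = 20, -5.0, 5.0              # toy 20-D box-constrained problem

def objective(x):                        # placeholder expensive objective (sphere)
    return float(np.sum(x ** 2))

X = rng.uniform(lb, ub, size=(10, dim))  # initial design
y = np.array([objective(x) for x in X])

def expected_improvement(cand, gp, y_best):
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    imp = y_best - mu                    # improvement for minimization
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(20):                      # BO iterations
    # One ARD length scale per dimension: 20 kernel hyperparameters to tune here.
    gp = GaussianProcessRegressor(kernel=Matern(length_scale=np.ones(dim), nu=2.5),
                                  normalize_y=True)
    gp.fit(X, y)                         # cost grows cubically with len(X)
    cand = rng.uniform(lb, ub, size=(2000, dim))   # random candidate pool
    ei = expected_improvement(cand, gp, y.min())
    x_next = cand[np.argmax(ei)]         # maximize the infill criterion
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.min())
```

Each call to gp.fit scales cubically with the number of evaluated points, and the ARD kernel adds one length-scale hyperparameter per search dimension, which illustrates why the cost of fitting the surrogate grows as the search-space dimension increases.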

Cited by 2 publications (3 citation statements)
References 33 publications
“…Additionally, there is a notable computational expense during hyperparameter tuning of the surrogate in the high-dimensional case. To mitigate this challenge, employing methods such as REMBO (Wang et al 2016) and ALEBO (Letham et al 2020) or (k)PCA-BO (Raponi et al 2020; Antonov et al 2022) presents an avenue for further reducing the computational cost. These methods operate under the assumption that certain dimensions are more significant than others, consequently reducing the number of tunable hyperparameters.…”
Section: Discussion (mentioning)
confidence: 99%
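For intuition about the shared idea behind these methods (searching in a space with fewer tunable hyperparameters than the original one), below is a hedged sketch of a random linear embedding in the spirit of REMBO: candidates are proposed in a small d-dimensional space and mapped into the original D-dimensional space through a fixed random matrix. The dimensions, the clipping rule, and the toy objective are illustrative assumptions, not the cited papers' exact procedures.

```python
# Sketch of the random-embedding idea (in the spirit of REMBO): search in a
# small d-dimensional space, evaluate in the original D-dimensional space.
# All constants, the clipping rule, and the objective are assumptions.
import numpy as np

rng = np.random.default_rng(1)
D, d = 100, 4                           # original and embedded dimensions
A = rng.normal(size=(D, d))             # fixed random embedding matrix

def to_high_dim(y_low, lb=-1.0, ub=1.0):
    """Map an embedded point in R^d to a point in R^D, clipped to the box."""
    return np.clip(A @ y_low, lb, ub)

def objective(x):                       # placeholder expensive objective
    return float(np.sum(x[:10] ** 2))   # only a few dimensions actually matter

# A GPR surrogate would now be fit over points in R^d, so only d kernel
# length scales need tuning instead of D.
y_trial = rng.uniform(-1.0, 1.0, size=d)
print(objective(to_high_dim(y_trial)))
```

A full method would run a BO loop like the earlier sketch over the embedded space; only the mapping from the embedded point to the original space differs.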
“…In Wang et al (2016), the authors use random projection methods to project the high-dimensional inputs to a lower dimensional subspace, ending up by constructing the GP model directly on the lower dimensional space, drastically reducing the number of hyperparameters. Raponi et al (2020) and Antonov et al (2022) use (kernel) Principal Component Analysis on the input space to identify a reduced set of dimensions based on the evaluated samples. Afterwards, the surrogate model is trained in this reduced dimensional space.…”
Section: High-dimensional Problems (mentioning)
confidence: 99%
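The following sketch illustrates the PCA-based variant described in the statement above: principal components are computed from the evaluated samples, the GP surrogate is trained in the reduced space, the infill criterion is optimized there, and the selected candidate is mapped back to the original space for evaluation. It is a minimal sketch under assumed settings (toy objective, dimensions, random-search acquisition) and omits details of the cited methods, such as the kernelized variant of Antonov et al (2022).

```python
# Sketch of PCA-assisted BO: reduce the search space with PCA fitted on the
# evaluated samples, train the GP surrogate in the reduced space, and map
# the selected candidate back to the original space for evaluation.
# Dimensions, the toy objective, and all settings are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
D, r = 40, 3                             # original and reduced dimensions
lb, ub = -5.0, 5.0

def objective(x):                        # placeholder expensive objective
    return float(np.sum((x - 1.0) ** 2))

X = rng.uniform(lb, ub, size=(20, D))    # initial design in the original space
y = np.array([objective(x) for x in X])

for _ in range(15):
    pca = PCA(n_components=r).fit(X)     # reduced axes from evaluated samples
    Z = pca.transform(X)                 # samples expressed in the reduced space
    gp = GaussianProcessRegressor(kernel=Matern(length_scale=np.ones(r), nu=2.5),
                                  normalize_y=True)
    gp.fit(Z, y)                         # only r length scales instead of D

    # Optimize Expected Improvement in the reduced space by random search.
    cand = rng.uniform(Z.min(0), Z.max(0), size=(3000, r))
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    imp = y.min() - mu
    ei = imp * norm.cdf(imp / sigma) + sigma * norm.pdf(imp / sigma)
    z_next = cand[np.argmax(ei)]

    # Map the selected candidate back to R^D and evaluate it there.
    x_next = np.clip(pca.inverse_transform(z_next.reshape(1, -1))[0], lb, ub)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best value found:", y.min())
```

Because the GP is trained on r-dimensional inputs, only r length-scale hyperparameters are tuned per iteration instead of D, which is the reduction the quoted passage describes.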
“…However, these methods assume knowledge of the objective function, which is not available in the case of human satisfaction. On the other hand, the approaches in [16,17] project the original solution space into a low dimensional space using PCA or a variational autoencoder. Since the dimension reduction in these methods does not utilize objective function information, the low dimensional space may not contain the optimal solution and, therefore, it can introduce bias.…”
Section: Introduction (mentioning)
confidence: 99%