2023
DOI: 10.1609/aaai.v37i9.26335
High-Dimensional Dueling Optimization with Preference Embedding

Abstract: In many scenarios of black-box optimization, evaluating the objective function values of solutions is expensive, while comparing a pair of solutions is relatively cheap, which yields dueling black-box optimization. A side effect of dueling optimization is that it doubles the dimension of the solution space and exacerbates the dimensionality scalability issue of black-box optimization, e.g., Bayesian optimization. To address this issue, existing dueling optimization methods fix one solution when dueling t…
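The abstract's core premise — that a cheap pairwise comparison can stand in for two expensive objective evaluations — can be sketched as follows. This is a minimal illustration, not the paper's method; the objective, the `duel` helper, and all names are assumptions introduced here:

```python
import numpy as np

def f(x):
    # Stand-in for an expensive black-box objective (negative sphere
    # function, maximized at the origin). In a real dueling setting,
    # f itself would be costly or unavailable.
    return float(-np.sum(x ** 2))

def duel(x, y):
    # Hypothetical dueling oracle: reports only which of the two
    # solutions is preferred, never the objective values themselves.
    return bool(f(x) > f(y))

# The point closer to the optimum wins the duel.
print(duel(np.zeros(5), np.ones(5)))  # True
```

Note that the oracle's input is the concatenated pair (x, y) — this is the dimension doubling the abstract refers to: a d-dimensional problem induces a 2d-dimensional comparison space.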

Cited by 3 publications (1 citation statement)
References 20 publications
“…Bayesian optimization (BO) [31,25] under the offline scenario explicitly assesses uncertainty by the Gaussian process surrogate model and leverages it in the acquisition function to alleviate the performance degradation in optimization. Although the latest BO studies have made significant progress in addressing the curse of dimensionality under certain conditions [26,36,4], they still have difficulties in addressing large-scale offline data. Some existing methods propose to sample solutions from learned generative models and introduce regularization in the sampling process.…”
Section: Introduction
Mentioning confidence: 99%
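The quoted passage describes the standard Bayesian-optimization loop: a Gaussian-process surrogate supplies a posterior mean and an explicit uncertainty estimate, and an acquisition function combines the two to choose the next evaluation point. A minimal NumPy sketch of that loop follows; the kernel, acquisition choice (UCB), toy objective, and all function names are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at test points Xs, given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

def ucb(mu, var, beta=2.0):
    # Acquisition function: trade off predicted value against uncertainty,
    # the mechanism the passage credits with mitigating degradation.
    return mu + beta * np.sqrt(var)

def objective(X):
    return -np.sum(X ** 2, axis=-1)  # toy black-box objective

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5, 2))        # points evaluated so far
y = objective(X)
cand = rng.uniform(-1, 1, size=(200, 2))   # candidate pool
mu, var = gp_posterior(X, y, cand)
nxt = cand[np.argmax(ucb(mu, var))]        # next point to evaluate
```

The inverted kernel matrix is the scalability bottleneck the passage alludes to: its cost grows cubically with the number of observations, which is why large-scale offline data remains difficult for GP-based BO.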