2022
DOI: 10.48550/arxiv.2206.05447
Preprint
Improving Accuracy of Interpretability Measures in Hyperparameter Optimization via Bayesian Algorithm Execution

Abstract: Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process which led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc ma…
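To make the IML connection concrete, the following is a minimal sketch (not the authors' implementation) of the standard partial dependence estimate computed on a surrogate model fitted to HPO data; the archive, hyperparameter names, and the `partial_dependence` helper are illustrative assumptions.

```python
# Minimal sketch: a partial dependence curve on a surrogate fitted to HPO data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical HPO archive: 50 configurations of two hyperparameters with losses.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 2))
y = (X[:, 0] - 0.3) ** 2 + 0.5 * np.sin(6 * X[:, 1]) + rng.normal(0.0, 0.05, 50)

surrogate = GaussianProcessRegressor().fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average the surrogate prediction over the observed values of the
    remaining hyperparameters, for each grid value of `feature`."""
    curve = []
    for g in grid:
        X_mod = X.copy()
        X_mod[:, feature] = g  # fix the hyperparameter of interest
        curve.append(model.predict(X_mod).mean())
    return np.array(curve)

grid = np.linspace(0.0, 1.0, 20)
pd_curve = partial_dependence(surrogate, X, feature=0, grid=grid)
```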

Cited by 3 publications (4 citation statements)
References 9 publications
“…In order to reduce the number of function queries, our MULTIPOINT-BAX method uses techniques from an information-based BAX method, InfoBAX [32,58,59], to make targeted queries that maximize the mutual information between O_A and the next observation y_t. The InfoBAX procedure is a sequential algorithm that seeks to maximize the acquisition function, defined here as the expected information gain (EIG) about O_A upon observing y_t.…”
Section: Appendix A: Experimental Setup
confidence: 99%
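As a rough illustration of that acquisition, the sketch below estimates an InfoBAX-style EIG under Gaussian predictive distributions: conditioning on the execution path of the algorithm run on posterior samples stands in for conditioning on O_A. The `algorithm` callable and all names are hypothetical assumptions, not the MULTIPOINT-BAX or InfoBAX implementation.

```python
# Minimal sketch of an InfoBAX-style expected information gain acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gaussian_entropy(var):
    # Differential entropy of a univariate Gaussian with variance `var`.
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def infobax_eig(gp, X_data, y_data, X_cand, algorithm, n_samples=20):
    """EIG(x) ~ H[y_x | D] - (1/J) * sum_j H[y_x | D u e_j], where e_j is the
    execution path of algorithm A run on the j-th posterior sample."""
    _, std = gp.predict(X_cand, return_std=True)
    h_prior = gaussian_entropy(std ** 2 + 1e-12)

    h_post = np.zeros(len(X_cand))
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        # Draw a posterior sample on a coarse grid and run A on it.
        X_grid = rng.uniform(0.0, 1.0, size=(100, X_data.shape[1]))
        f_sample = gp.sample_y(X_grid, n_samples=1).ravel()
        exec_X, exec_y = algorithm(X_grid, f_sample)  # execution path e_j

        # Condition the surrogate on the execution path and re-measure entropy.
        gp_j = GaussianProcessRegressor().fit(
            np.vstack([X_data, exec_X]), np.concatenate([y_data, exec_y]))
        _, std_j = gp_j.predict(X_cand, return_std=True)
        h_post += gaussian_entropy(std_j ** 2 + 1e-12)

    return h_prior - h_post / n_samples  # maximize over X_cand to pick the next query
```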
“…The field of human-centered HPO/AutoML has gained increasing traction in recent years, with approaches targeted at explaining the hyperparameter optimization process to increase trust in automated tools (Pfisterer, Thomas, and Bischl 2019; Moosbauer et al. 2021, 2022; Segel et al. 2023). This also includes concrete tools developed to help a user interpret the results, such as XAutoML (Zöller et al. 2022) or DeepCave (Sass et al. 2022).…”
Section: Related Work
confidence: 99%
“…With this paper, we propose an interactive human-centered HPO approach (Pfisterer, Thomas, and Bischl 2019; Souza et al. 2021; Moosbauer et al. 2021; Hvarfner et al. 2022; Moosbauer et al. 2022; Francia, Giovanelli, and Pisano 2023; Segel et al. 2023; Mallik et al. 2023) for MO-ML algorithms that frees users from choosing a predefined quality indicator suitable for their needs by learning one tailored towards them based on feedback. To achieve this, it first learns the desired Pareto front shape from the user in a short interactive session and then starts a corresponding HPO process optimizing towards the previously…”
Section: Introduction
confidence: 99%
“…One key challenge of incorporating BO methods in R&D is that these methods are typically considered black boxes with limited explainability and interpretability [35], hindering their widespread adoption. Additionally, when the search space is large, researchers face difficulties in visualizing and understanding the way that the parameters influence the objectives.…”
Section: Introduction
confidence: 99%