2021
DOI: 10.48550/arxiv.2111.04820
Preprint

Explaining Hyperparameter Optimization via Partial Dependence Plots

Abstract: Automated hyperparameter optimization (HPO) can support practitioners in obtaining peak performance from machine learning models. However, it often provides little insight into the effects of different hyperparameters on the final model performance. This lack of explainability makes it difficult to trust and understand the automated HPO process and its results. We suggest using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO with Bayesian optimization (…
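The abstract proposes applying IML methods such as partial dependence plots to the trial data collected during Bayesian-optimization-based HPO. The sketch below illustrates that idea only loosely, using scikit-learn on synthetic trial data; the hyperparameter names, the random-forest surrogate, and the data are placeholders, not the paper's actual setup.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n_trials = 200

# Pretend archive of HPO trials: (learning_rate, max_depth) configurations
# and the validation error observed for each one (all synthetic).
learning_rate = rng.uniform(1e-4, 1e-1, n_trials)
max_depth = rng.integers(1, 16, n_trials).astype(float)
configs = np.column_stack([learning_rate, max_depth])
val_error = (
    0.05 * (np.log10(learning_rate) + 2.5) ** 2   # U-shaped effect of the learning rate
    + 0.01 * max_depth                            # mild effect of tree depth
    + rng.normal(0.0, 0.01, n_trials)             # observation noise
)

# Surrogate model fitted on the trial archive, standing in for the BO surrogate.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(configs, val_error)

# Marginal effect of the learning rate on the predicted validation error.
pdp = partial_dependence(surrogate, configs, features=[0], kind="average")
grid = pdp.get("grid_values", pdp.get("values"))[0]   # key name differs across scikit-learn versions
print(np.column_stack([grid, pdp["average"][0]]))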


Cited by 2 publications (2 citation statements)
References 15 publications
“…ML models are mostly black-box in nature, but attempts can be made to explain how they generate predictions. Our workflow (Figure 1) utilizes two post-hoc, model-agnostic methods for model interpretation: the SHapley Additive exPlanations (SHAP) method by Lundberg et al. [64], which helps develop reasoning behind individual predictions of the model, and partial dependence plots (PDP) [65], which are used to represent global relationships between input and output variables. The implementation of both methods was made using an in-house-developed Python wrapper script [66].…”
Section: Methods
confidence: 99%
“…These plots visualize the marginal effect of a feature on the predicted outcomes of the model, providing valuable insights into how the model is making its predictions. These plots typically show the relationship between a specific feature and the predicted outcome of the model, with the feature on the x-axis and the predicted outcome on the y-axis (Inouye et al., 2020; Moosbauer et al., 2021).…”
Section: Random Forest Partial Dependence Plots
confidence: 99%
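To make the "marginal effect" in that statement concrete, here is a minimal hand-rolled sketch of how a one-dimensional partial dependence curve is typically computed: the feature of interest is clamped to each grid value for every row, the model's predictions are averaged, and that average becomes the y-axis value for that grid point. The model and X arguments are placeholders for any fitted regressor and its feature matrix.

import numpy as np

def partial_dependence_1d(model, X, feature_idx, grid):
    """Average prediction of `model` when column `feature_idx` of `X` is fixed at each grid value."""
    curve = []
    for value in grid:
        X_fixed = np.array(X, dtype=float, copy=True)
        X_fixed[:, feature_idx] = value               # clamp the feature of interest for every row
        curve.append(model.predict(X_fixed).mean())   # average over all other features (marginalize)
    return np.asarray(curve)                          # y-axis values; `grid` supplies the x-axis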