2021
DOI: 10.48550/arxiv.2111.13755
Preprint

A survey on multi-objective hyperparameter optimization algorithms for Machine Learning

Abstract: Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of Machine Learning (ML) algorithms. Several methods have been developed to perform HPO; most of these are focused on optimizing one performance measure (usually an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, though, algorithms have appeared which focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the literature…
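The multi-objective setting the abstract describes can be made concrete with a small sketch (illustrative only, not taken from the paper): given hyperparameter configurations evaluated on two assumed conflicting objectives, say validation error and inference time, the Pareto-optimal set contains the configurations that no other configuration improves on in every objective.

def dominates(a, b):
    # a dominates b when a is no worse in every objective and strictly
    # better in at least one; both objectives are minimized here.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (validation error, inference time in ms) pairs for five
# hyperparameter configurations; the values are made up for illustration.
evals = [(0.10, 50.0), (0.12, 20.0), (0.08, 90.0), (0.15, 15.0), (0.11, 60.0)]
print(pareto_front(evals))  # (0.11, 60.0) is dominated by (0.10, 50.0)

A multi-objective HPO algorithm returns such a Pareto front rather than a single best configuration, leaving the final trade-off to the user.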

Cited by 2 publications (2 citation statements)
References 107 publications (165 reference statements)
“…For example, in the simple version of our model, there are more than 8600 hyperparameter combinations. Deep learning hyperparameter optimization (HPO) is an active research area [44][45][46][47][48][49]. In such problems, the goal is to maximize/minimize an expensive objective function whose closed form and its gradients are unknown.…”
Section: Bayesian Optimization Methodology (citation type: mentioning)
Confidence: 99%
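The black-box setting this excerpt describes (an expensive objective with no closed form or gradients, evaluated only point-wise) can be sketched with the simplest gradient-free baseline, random search; a Bayesian optimizer would replace the uniform sampler with a surrogate-guided one. The model, dataset, and search ranges below are assumptions for illustration, not taken from the cited work.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

def objective(params):
    # Expensive black-box objective: mean cross-validated accuracy.
    # No closed form, no gradients -- we can only evaluate it point-wise.
    model = RandomForestClassifier(
        n_estimators=params["n_estimators"],
        max_depth=params["max_depth"],
        random_state=0,
    )
    return cross_val_score(model, X, y, cv=3).mean()

def sample():
    # Hypothetical search space; the cited model's actual
    # hyperparameters and ranges are not given in the excerpt.
    return {
        "n_estimators": int(rng.integers(10, 200)),
        "max_depth": int(rng.integers(2, 20)),
    }

# Random search: each iteration trains and evaluates one configuration.
best_params, best_score = None, -np.inf
for _ in range(20):
    params = sample()
    score = objective(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)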
“…Deep neural networks lie at the heart of many of the artificial intelligence applications that are ubiquitous in our society. Over the past several years, methods for training these networks have become more automatic [1,2,3,4,5] but still remain more an art than a science. This paper introduces the high-level concept of general cyclical training as another step in making it easier to optimally train neural networks.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%