2022
DOI: 10.1007/s10462-022-10359-2

A survey on multi-objective hyperparameter optimization algorithms for machine learning

Abstract: Hyperparameter optimization (HPO) is a necessary step to ensure the best possible performance of Machine Learning (ML) algorithms. Several methods have been developed to perform HPO; most of these are focused on optimizing one performance measure (usually an error-based measure), and the literature on such single-objective HPO problems is vast. Recently, though, algorithms have appeared that focus on optimizing multiple conflicting objectives simultaneously. This article presents a systematic survey of the lit…
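
The abstract's notion of optimizing several conflicting objectives at once can be made concrete with a small sketch. Below is a minimal multi-objective random search (not an algorithm from the survey) that trades validation error against a crude model-complexity proxy and keeps the Pareto-optimal configurations; the dataset, model, and search ranges are illustrative assumptions.

```python
# Minimal sketch of multi-objective HPO via random search: sample
# hyperparameter configurations, score each on two conflicting objectives
# (validation error vs. a complexity proxy), and keep the Pareto set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, random_state=0)  # toy data, an assumption

def dominates(a, b):
    """True if objective vector a is no worse than b everywhere and better somewhere."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

trials = []
for _ in range(30):
    n_estimators = int(rng.integers(10, 200))
    max_depth = int(rng.integers(2, 12))
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                 random_state=0)
    error = 1.0 - cross_val_score(clf, X, y, cv=3).mean()  # objective 1: minimize error
    size = n_estimators * max_depth                        # objective 2: minimize complexity
    trials.append(((error, size), {"n_estimators": n_estimators, "max_depth": max_depth}))

# A trial is Pareto-optimal if no other trial dominates it in both objectives.
pareto = [t for t in trials if not any(dominates(obj, t[0]) for obj, _ in trials)]
for (err, size), params in sorted(pareto, key=lambda t: t[0]):
    print(f"error={err:.3f} size={size:4d} params={params}")
```

The printout is a Pareto front rather than a single "best" configuration, which is the defining difference between multi-objective and single-objective HPO.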


Cited by 64 publications (15 citation statements)
References 155 publications
“…In addition to brute-force hyperparameter optimization methods such as grid search and random search [39], other techniques such as Bayesian optimization [40][41][42] and Tree Parzen Estimators [43] exist, although they are not commonly used in the context of HSIC. For a more detailed overview of important hyperparameter optimization methods, see reviews [44,45].…”
Section: Related Work
confidence: 99%
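
As a concrete illustration of the Tree-structured Parzen Estimator (TPE) approach the quoted passage mentions, the sketch below uses Optuna, whose default sampler is TPE; the SVM task and the search ranges are illustrative assumptions, not taken from the citing paper.

```python
# Hedged sketch of TPE-based hyperparameter search with Optuna
# (Optuna's default sampler is a Tree-structured Parzen Estimator).
import optuna
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # toy dataset, an assumption

def objective(trial):
    # TPE proposes new values by modeling "good" vs. "bad" past trials.
    c = trial.suggest_float("C", 1e-3, 1e3, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e1, log=True)
    return cross_val_score(SVC(C=c, gamma=gamma), X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")  # TPESampler by default
study.optimize(objective, n_trials=25)
print(study.best_params, study.best_value)
```

Unlike grid or random search, each new trial here is informed by the history of previous trials, which is what makes TPE a model-based (Bayesian-style) method.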
“…Therefore, we employ grid search on a small number of candidates to maximize G_T while maintaining nearly state-of-the-art accuracy. Alternatively, it is possible to search for an optimal G_T more efficiently by applying advanced multi-objective hyperparameter optimization approaches [41], such as multi-objective Bayesian optimization. The group number tuning discussed here also applies to the grouping techniques we use in other modules.…”
Section: GTCN(𝐻…)
confidence: 99%
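
The quoted strategy, grid search over a few candidates that maximizes G_T subject to near state-of-the-art accuracy, amounts to an epsilon-constraint formulation of a two-objective problem. A minimal sketch follows; evaluate_accuracy, the candidate list, and the tolerance are hypothetical stand-ins for the paper's actual model and metric.

```python
# Sketch of epsilon-constraint selection: among candidate group numbers whose
# accuracy stays within `tol` of the best observed accuracy, pick the largest.
def pick_group_number(candidates, evaluate_accuracy, tol=0.005):
    accs = {g: evaluate_accuracy(g) for g in candidates}  # grid search over candidates
    best = max(accs.values())
    # epsilon-constraint: accuracy must stay near state of the art
    feasible = [g for g, a in accs.items() if a >= best - tol]
    return max(feasible)  # maximize the group number G_T among feasible ones

# Illustrative usage with a fake accuracy curve that degrades as G_T grows.
fake_acc = {1: 0.912, 2: 0.911, 4: 0.909, 8: 0.901}
print(pick_group_number([1, 2, 4, 8], fake_acc.__getitem__))  # -> 4
```

Turning one objective into a constraint like this is a common, cheap alternative to full multi-objective optimization when the candidate set is small.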
“…Table 5 displays the averaged Friedman ranks over all datasets for OLS, GEM 11 and for the best stop criterion (according to the results of Table 4) among AIC, AICc, BIC, HQIC and gMDL for FSR, PCR, PLS, BOOST and RBOOST, as well as the novel stop criterion ICM for BOOST and RBOOST, taking into account some base-learners (Ridge, SVR and RFR) and several sampling strategies (GS, RS, BO, PSO and HB). In this table, one can observe that both BOOST and RBOOST outperform the rest of the methods, for the best stop criterion among AIC, AICc, BIC, HQIC and gMDL and also for the novel stop criterion ICM (see the last row of the last four columns of Table 5).…”
Section: Performance Analysis
confidence: 99%
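
The stop criteria named in the quote (AIC, AICc, BIC, HQIC) are information criteria that penalize model complexity. The sketch below shows one plausible way such a criterion can stop an iterative fitting run, using the standard Gaussian-likelihood formulas and treating the iteration count as the complexity term k; this is an assumption for illustration, not the paper's ICM criterion or its exact setup.

```python
# Hedged sketch: information criteria as a stop rule for boosting-style
# iterations, with k = iteration count as a crude complexity proxy.
import math

def criteria(rss, n, k):
    """Standard Gaussian-likelihood information criteria from residual sum of squares."""
    aic = n * math.log(rss / n) + 2 * k
    return {
        "AIC": aic,
        "AICc": aic + 2 * k * (k + 1) / (n - k - 1),
        "BIC": n * math.log(rss / n) + k * math.log(n),
        "HQIC": n * math.log(rss / n) + 2 * k * math.log(math.log(n)),
    }

def stop_iteration(rss_per_iter, n, which="BIC"):
    """Return the iteration at which the chosen criterion stops improving."""
    scores = [criteria(rss, n, k + 1)[which] for k, rss in enumerate(rss_per_iter)]
    for k in range(1, len(scores)):
        if scores[k] > scores[k - 1]:
            return k  # the previous iteration minimized the criterion
    return len(scores)

# Illustrative run: residual error shrinks quickly, then the penalty dominates.
print(stop_iteration([50.0, 30.0, 22.0, 20.5, 20.2, 20.1], n=100))
```

All four criteria share the same structure, a goodness-of-fit term plus a complexity penalty; they differ only in how strongly the penalty grows with n and k, which is why the quoted study compares them empirically.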
“…However, only a few of them do. In fact, AutoML systems mainly focus on other parts of the process, such as parallelizing or distributing the search, improving the efficiency of exploring promising hyperparameter configurations, or even on multi-output HPO [11]. The few systems that include ensembles in their scenarios do not perform an exhaustive study of them; instead, they merely offer the option of whether to perform ensembling or not.…”
Section: Introduction
confidence: 99%