2017
DOI: 10.1609/aaai.v31i1.10647

Efficient Hyperparameter Optimization for Deep Learning Algorithms Using Deterministic RBF Surrogates

Abstract: Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., …
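The abstract contrasts probabilistic surrogates (Gaussian processes) with deterministic radial basis function (RBF) surrogates. As a rough illustration of the latter idea, here is a minimal, self-contained Python sketch of a surrogate-assisted search loop: fit a cubic RBF interpolant to the configurations evaluated so far, score perturbation candidates around the incumbent by a weighted mix of predicted error and distance to evaluated points, then evaluate the winner. This is a simplified stand-in, not the paper's HORD implementation; the objective `validation_error`, the bounds, and the two toy hyperparameters (log10 learning rate, log2 batch size) are placeholder assumptions.

```python
# Simplified sketch of hyperparameter search with a deterministic cubic RBF
# surrogate (illustrative only; not the HORD code from the paper).
import numpy as np

rng = np.random.default_rng(0)

def fit_cubic_rbf(X, f):
    """Fit a cubic RBF interpolant with a linear polynomial tail to (X, f)."""
    n, d = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3
    P = np.hstack([X, np.ones((n, 1))])
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    lam, c = coef[:n], coef[n:]

    def predict(Y):
        K = np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=-1) ** 3
        return K @ lam + np.hstack([Y, np.ones((len(Y), 1))]) @ c

    return predict

def propose(X, f, lo, hi, n_cand=500, weight=0.8):
    """Score perturbations of the incumbent by predicted value and novelty."""
    best = X[np.argmin(f)]
    sigma = 0.2 * (hi - lo)
    cand = np.clip(best + sigma * rng.standard_normal((n_cand, X.shape[1])), lo, hi)
    pred = fit_cubic_rbf(X, f)(cand)
    v_hat = (pred - pred.min()) / (np.ptp(pred) + 1e-12)         # low predicted error is good
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1)
    v_dist = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)  # large distance is good
    return cand[np.argmin(weight * v_hat + (1 - weight) * v_dist)]

def validation_error(x):
    # Placeholder objective over (log10 learning rate, log2 batch size);
    # in practice this would train a DNN and return its validation error.
    return (x[0] + 3.0) ** 2 + 0.1 * (x[1] - 6.0) ** 2 + 0.01 * rng.standard_normal()

lo, hi = np.array([-5.0, 3.0]), np.array([-1.0, 9.0])
X = rng.uniform(lo, hi, size=(6, 2))              # small initial design
f = np.array([validation_error(x) for x in X])
for _ in range(30):                               # sequential surrogate-guided evaluations
    x_new = propose(X, f, lo, hi)
    X = np.vstack([X, x_new])
    f = np.append(f, validation_error(x_new))
print("best config:", X[np.argmin(f)], "error:", f.min())
```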

Cited by 88 publications (29 citation statements)
References 16 publications
“…The HORD algorithm was used to optimize various hyperparameters, such as batch size and initial learning rate. It was applied to the DNN-BB and the DNN-PBPK model separately.…”
Section: Materials and Methods (mentioning, confidence: 99%)
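The statement above describes tuning batch size and initial learning rate; the snippet below is a hypothetical wrapper (not code from the cited study) showing how such a training run could be exposed to a HORD-style optimizer as a single black-box function of a continuous vector, with the integer batch size obtained by rounding. `build_model` and `train` stand for assumed, user-supplied helpers.

```python
# Hypothetical objective wrapper for tuning (initial learning rate, batch size).
def objective(x):
    learning_rate = 10.0 ** x[0]        # x[0]: log10 of the initial learning rate
    batch_size = int(round(x[1]))       # x[1]: batch size, rounded to an integer
    model = build_model()               # assumed user-supplied model factory
    history = train(model, lr=learning_rate, batch_size=batch_size, epochs=20)
    return min(history["val_error"])    # validation error the optimizer minimizes
```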
“…Dataset prediction accuracy was obtained by training an XGBoost classifier on the training set and applying it to the validation set. We utilized the HORD algorithm (Regis & Shoemaker, 2013; Ilievski et al., 2017; Eriksson et al., 2020) to find the best set of hyperparameters using the validation set (Table 2). The trained DNN after 1000 epochs was utilized for subsequent analyses.…”
Section: Methods (mentioning, confidence: 99%)
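For the XGBoost setting described above, a validation-set objective handed to a HORD-style search might look like the sketch below; the particular hyperparameters, their encodings, and the pre-split `X_train`, `y_train`, `X_val`, `y_val` arrays are illustrative assumptions rather than details from the cited study.

```python
# Illustrative validation-error objective for tuning an XGBoost classifier.
from xgboost import XGBClassifier

def xgb_validation_error(x):
    clf = XGBClassifier(
        max_depth=int(round(x[0])),     # integer-valued hyperparameter, rounded
        learning_rate=10.0 ** x[1],     # searched on a log scale
        n_estimators=int(round(x[2])),
        subsample=float(x[3]),
    )
    clf.fit(X_train, y_train)                 # fit on the training split
    return 1.0 - clf.score(X_val, y_val)      # validation error to minimize
```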
“…Automatic HPO facilitates fair comparisons [20]. The HPO problem has a long history dating back to the 1990s [21]. In addition, it was determined early on that different hyperparameter configurations produce the best results for different datasets.…”
Section: Hyperparameter Optimization (mentioning, confidence: 99%)