2019
DOI: 10.1016/j.knosys.2019.04.019

Hyperparameter optimization of deep neural network using univariate dynamic encoding algorithm for searches

Cited by 107 publications (45 citation statements)
References 14 publications
“…Hence, it is surprising that many DL applications in GP have not paid enough attention to this problem (Ma et al., 2017; Montesinos-López et al., 2018b; Montesinos-López et al., 2019b). Several approaches have been proposed for hyperparameter tuning (e.g., Bellot et al., 2018; Cho and Hegde, 2019; Le et al., 2019; Rajaraman et al., 2019; Yoo, 2019). Here, DL architectures were optimized using Talos (Autonomio Talos, 2019), which works by combining all parameters in a grid.…”
Section: Hyperparameter Optimization (mentioning)
Confidence: 99%
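
The statement above describes Talos's exhaustive grid-style search. Below is a minimal sketch of that idea in plain Python, without the Talos API itself; the hyperparameter names, ranges, and the placeholder evaluate function are illustrative assumptions, not taken from the cited work.

```python
# Minimal grid-search sketch (illustrative; hyperparameter names,
# ranges, and the objective are assumptions, not from the cited study).
import itertools

# Candidate values for each hyperparameter.
param_grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "hidden_units": [32, 64, 128],
    "dropout": [0.0, 0.2, 0.5],
}

def evaluate(params):
    """Placeholder for train-and-validate; returns a synthetic loss."""
    return ((params["learning_rate"] - 1e-3) ** 2
            + abs(params["hidden_units"] - 64) / 1000.0
            + params["dropout"])

best_params, best_loss = None, float("inf")
# Exhaustively enumerate every combination, as a grid search does.
for values in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    loss = evaluate(params)
    if loss < best_loss:
        best_params, best_loss = params, loss

print(best_params, best_loss)
```

Grid search is simple and reproducible, but its cost grows multiplicatively with each added hyperparameter, which is why the next statement turns to Bayesian optimization.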
“…Unlike grid search and random search, Bayesian optimization can save considerable time in determining the optimal hyper-parameters of a deep neural network [36]. The process of determining the optimal hyper-parameters of the network is treated as a global optimization problem [37]. The objective function of the optimization problem can be expressed as Eq.…”
Section: B. Bayesian Optimization (mentioning)
Confidence: 99%
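
As a companion to the statement above, which frames tuning as a global optimization problem, here is a minimal Bayesian-optimization sketch using scikit-optimize's gp_minimize; the search space and the synthetic objective are illustrative assumptions, not the objective referenced as "Eq." in the quote.

```python
# Minimal Bayesian-optimization sketch with scikit-optimize
# (assumed installed via `pip install scikit-optimize`); the search
# space and objective are illustrative, not from the cited papers.
from skopt import gp_minimize
from skopt.space import Real, Integer

# Search space: learning rate (log scale) and hidden-layer width.
space = [
    Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="hidden_units"),
]

def objective(x):
    """Placeholder for train-and-validate; returns a synthetic loss."""
    learning_rate, hidden_units = x
    return (learning_rate - 1e-3) ** 2 + abs(hidden_units - 64) / 1000.0

# A Gaussian-process surrogate proposes each new point by trading off
# exploration and exploitation, so far fewer evaluations are needed
# than an exhaustive grid.
result = gp_minimize(objective, space, n_calls=25, random_state=0)
print(result.x, result.fun)
```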
“…In addition to hyperparameters, the neural network architecture should be optimized for better performance [38]. In this study, because the ANN was hand-tuned through multiple trial-and-error sessions, a more effective hyperparameter set might be found by other, more sophisticated optimization methods. Finally, because the dataset was…”
Section: Limitations (mentioning)
Confidence: 99%
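
As one concrete example of the automated alternatives to manual trial-and-error that the statement above alludes to, a minimal random-search sketch follows; the sampling ranges and the placeholder objective are illustrative assumptions, not drawn from the cited study.

```python
# Minimal random-search sketch as one automated alternative to manual
# trial-and-error tuning (ranges and objective are illustrative).
import random

random.seed(0)

def sample_params():
    """Draw one random hyperparameter configuration."""
    return {
        "learning_rate": 10 ** random.uniform(-5, -1),  # log-uniform
        "hidden_units": random.choice([16, 32, 64, 128, 256]),
        "dropout": random.uniform(0.0, 0.5),
    }

def evaluate(params):
    """Placeholder for train-and-validate; returns a synthetic loss."""
    return (params["learning_rate"] - 1e-3) ** 2 + params["dropout"]

best_params, best_loss = None, float("inf")
for _ in range(30):  # fixed evaluation budget
    params = sample_params()
    loss = evaluate(params)
    if loss < best_loss:
        best_params, best_loss = params, loss

print(best_params, best_loss)
```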