Finding the best configuration of a neural network for a given problem is challenging because of the large number of possible hyper-parameter values. Hyper-parameter tuning is therefore an important step, and researchers suggest automating it. However, it is important to determine when automated tuning is suitable, since it is usually costly both financially and in terms of hardware infrastructure. In this study, we analyze the advantages of using a hyper-parameter optimization framework to automate the search for the hyper-parameters of a neural network. To this end, we use data from an experiment on predicting the temperature of computers embedded in unmanned aerial vehicles (UAVs), and Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models to perform the predictions. In addition, we compare the hyper-parameter optimization framework with exhaustive search while varying the size of the training dataset. The results show that a model designed with the hyper-parameter optimizer can perform up to 36.02% better than one obtained through exhaustive search, while also achieving satisfactory results with a reduced dataset.
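To illustrate the kind of automated search the study evaluates, the following is a minimal sketch of hyper-parameter optimization for an LSTM regressor. It assumes Optuna as the optimization framework and a Keras LSTM; the abstract does not name the specific framework or architecture details, and the data below is synthetic placeholder data standing in for the UAV temperature time series.

```python
# Hedged sketch: Optuna-driven hyper-parameter search for a Keras LSTM.
# Framework choice (Optuna), search ranges, and the synthetic data are
# illustrative assumptions, not the paper's actual setup.
import numpy as np
import optuna
import tensorflow as tf

# Placeholder sliding-window data: 500 samples, 20 time steps, 1 feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20, 1)).astype("float32")
y = rng.normal(size=(500,)).astype("float32")
X_train, X_val = X[:400], X[400:]
y_train, y_val = y[:400], y[400:]


def objective(trial: optuna.Trial) -> float:
    # Hyper-parameters sampled by the optimizer on each trial.
    units = trial.suggest_int("units", 32, 128)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units, input_shape=(20, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss="mse",
    )
    history = model.fit(
        X_train, y_train,
        validation_data=(X_val, y_val),
        epochs=10,
        verbose=0,
    )
    # Optuna minimizes the returned value (best validation MSE here).
    return min(history.history["val_loss"])


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print("Best hyper-parameters:", study.best_params)
```

In contrast, the exhaustive-search baseline mentioned in the abstract would train one model for every combination in a fixed grid of hyper-parameter values, which grows quickly in cost as the grid widens; the optimizer instead samples promising configurations adaptively across a bounded number of trials.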