2022
DOI: 10.1109/access.2022.3141781

DeepQGHO: Quantized Greedy Hyperparameter Optimization in Deep Neural Networks for on-the-Fly Learning

Abstract: Hyperparameter optimization or tuning plays a significant role in the performance and reliability of deep learning (DL). Many hyperparameter optimization algorithms have been developed to obtain better validation accuracy in DL training. Most state-of-the-art hyperparameter optimization algorithms are computationally expensive because they focus on validation accuracy, so they are unsuitable for online or on-the-fly training applications that require computational efficiency. In this paper, we develop a novel greedy approach…
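The abstract is truncated here and the paper's exact procedure is not reproduced on this page, but the general idea of a greedy search over a quantized (discretized) hyperparameter space can be sketched roughly as below. The `train_and_validate` callback, the hyperparameter names, and the grid values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a greedy, coordinate-wise search over a quantized
# (discretized) hyperparameter space; `train_and_validate` is a hypothetical
# user-supplied callback returning validation accuracy, and the grids below
# are illustrative, not values from the paper.
def greedy_hpo(train_and_validate, grids):
    # Start from the first value of every hyperparameter grid.
    best = {name: values[0] for name, values in grids.items()}
    best_acc = train_and_validate(**best)

    # Tune one hyperparameter at a time, keeping the others fixed at their
    # current best values and accepting only greedy improvements.
    for name, values in grids.items():
        for value in values[1:]:
            candidate = dict(best, **{name: value})
            acc = train_and_validate(**candidate)
            if acc > best_acc:
                best, best_acc = candidate, acc
    return best, best_acc


if __name__ == "__main__":
    # Toy stand-in for a short training/validation run.
    def train_and_validate(lr, dropout, batch_size):
        return 1.0 - abs(lr - 0.01) - abs(dropout - 0.3) - batch_size * 1e-4

    grids = {"lr": [0.1, 0.01, 0.001],
             "dropout": [0.0, 0.3, 0.5],
             "batch_size": [32, 64, 128]}
    print(greedy_hpo(train_and_validate, grids))
```

Because each hyperparameter is swept only once over a small discrete grid, the number of training runs grows linearly with the number of grid points rather than combinatorially, which is what makes a greedy scheme attractive for on-the-fly settings.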


Cited by 12 publications (3 citation statements)
References 29 publications
“…Anjir A. Chowdhury et al. concentrated on the role of hyper-parameter optimization in the performance and reliability of deep learning outcomes [33]. They compared several HPO algorithms for obtaining better validation accuracy in DNNs and concluded that most of them are computationally expensive.…”
Section: Hyper-parameter Tuning in ML
confidence: 99%
“…For instance, model design techniques emphasize designing models with a reduced number of parameters without compromising accuracy, thereby enabling them to fit and execute within the available IoT device memory [10]. Also, model compression techniques such as quantization [11] and pruning [10] can be used. Quantization removes expensive floating-point operations by reducing values to Q-bit fixed-point numbers, while pruning removes unnecessary connections between the model layers.…”
Section: A. Machine Learning on Microcontrollers
confidence: 99%
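As a rough illustration of the Q-bit fixed-point quantization mentioned in the quote, the sketch below maps floating-point weights to Q-bit integers with a single symmetric, per-tensor scale. The symmetric scheme, function names, and NumPy-based approach are assumptions made for brevity, not the cited implementation.

```python
import numpy as np

# Float weights are mapped to Q-bit integers with one symmetric per-tensor
# scale, so expensive floating-point values become small integers at
# inference time. Symmetric, per-tensor scaling is an assumed simplification.
def quantize(weights, q_bits=8):
    qmax = 2 ** (q_bits - 1) - 1                           # e.g. 127 when Q = 8
    scale = max(float(np.abs(weights).max()), 1e-12) / qmax
    dtype = np.int8 if q_bits <= 8 else np.int32
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(dtype)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize(weights, q_bits=8)
print(np.abs(weights - dequantize(q, scale)).max())        # small rounding error
```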
“…These geometric features are frequently extracted and measured using deep learning techniques or traditional image processing techniques, which are labor-intensive, prone to errors, and have low rates of efficiency. In order to efficiently automate this process [9] [10] [11], our proposed algorithm makes use of computer vision and deep learning techniques to automatically extract each cell's geometric properties from images of biofilms.…”
Section: Introduction
confidence: 99%
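The cited work's pipeline is not detailed on this page. Assuming an upstream deep-learning model has already produced a binary segmentation mask of the cells, a minimal OpenCV-based sketch of measuring per-cell geometric properties might look like the following; the feature set and function names are illustrative only.

```python
import cv2
import numpy as np

# Sketch of the "measure each cell's geometric properties" step, assuming a
# binary segmentation mask from an upstream deep-learning model; the OpenCV
# calls and feature names are illustrative, not the authors' implementation.
def cell_geometry(mask):
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)       # closed contour length
        (_, _), (w, h), _ = cv2.minAreaRect(contour)   # fitted rotated box
        aspect = max(w, h) / max(min(w, h), 1e-6)      # cell elongation
        features.append({"area": area,
                         "perimeter": perimeter,
                         "aspect_ratio": aspect})
    return features
```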