2021
DOI: 10.3390/s21124054
HyAdamC: A New Adam-Based Hybrid Optimization Algorithm for Convolution Neural Networks

Abstract: As the performance of devices that conduct large-scale computations has been rapidly improved, various deep learning models have been successfully utilized in various applications. Particularly, convolution neural networks (CNN) have shown remarkable performance in image processing tasks such as image classification and segmentation. Accordingly, more stable and robust optimization methods are required to effectively train them. However, the traditional optimizers used in deep learning still have unsatisfactor…

Cited by 23 publications (14 citation statements)
References 41 publications
“…The gradient descent algorithm is mostly used for these purposes [21]. In our study, we specifically used the Adam algorithm, which is one of the most widely used optimization algorithms due to advantages such as high efficiency and low resource consumption [22].…”
Section: Materials, Methods and Results
confidence: 99%
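
As a hedged illustration of the Adam usage the statement above refers to, the sketch below shows a single training step with torch.optim.Adam in PyTorch; the model, data, and learning rate are placeholders and are not taken from the cited studies or from HyAdamC itself.

```python
# Minimal sketch of a training step with the Adam optimizer (PyTorch).
# Model, data, and learning rate are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Synthetic batch standing in for real image data.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()                      # clear accumulated gradients
loss = criterion(model(images), labels)    # forward pass and loss
loss.backward()                            # backpropagation
optimizer.step()                           # Adam parameter update
print(f"loss: {loss.item():.4f}")
```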
“…Each model was trained with different optimizers by varying the learning rates, and the best-fit hyperparameters were chosen as the ones with the highest validation accuracy. Based on previous work in image classification, the four most suitable optimizers, namely, Adam, RMSProp, SGD, and Nadam, were chosen for tuning each model [70,71,72]. The learning rate was varied from 10⁻² to 10⁻⁵ to determine the best fit.…”
Section: Methods
confidence: 99%
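
The tuning procedure described above (four optimizers, learning rates swept from 10⁻² to 10⁻⁵, selection by validation accuracy) can be sketched as a simple grid search. The outline below is hypothetical, written in PyTorch with a synthetic dataset; it is not the cited authors' code.

```python
# Hypothetical grid search over optimizers and learning rates, selecting the
# configuration with the highest validation accuracy. Model and data are
# synthetic placeholders.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))

# Synthetic "train" and "validation" splits standing in for real images.
x_tr, y_tr = torch.randn(256, 64), torch.randint(0, 4, (256,))
x_va, y_va = torch.randn(64, 64), torch.randint(0, 4, (64,))

optimizers = {
    "Adam": torch.optim.Adam,
    "RMSProp": torch.optim.RMSprop,
    "SGD": torch.optim.SGD,
    "Nadam": torch.optim.NAdam,
}
learning_rates = [1e-2, 1e-3, 1e-4, 1e-5]
criterion = nn.CrossEntropyLoss()

best = ("", 0.0, 0.0)  # (optimizer name, learning rate, validation accuracy)
for name, opt_cls in optimizers.items():
    for lr in learning_rates:
        model = make_model()
        opt = opt_cls(model.parameters(), lr=lr)
        for _ in range(20):                       # short training loop
            opt.zero_grad()
            loss = criterion(model(x_tr), y_tr)
            loss.backward()
            opt.step()
        with torch.no_grad():                     # validation accuracy
            acc = (model(x_va).argmax(dim=1) == y_va).float().mean().item()
        if acc > best[2]:
            best = (name, lr, acc)

print(f"best fit: {best[0]} at lr={best[1]:.0e}, val acc={best[2]:.3f}")
```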
“…Two AFs are considered for the output layer: Sigmoid [79,81,88,89] and Softmax [90,91]. Lastly, three optimizers are used, namely, Adam [92,93,94], RMSProp [90,95,96], and SGD [97,98].…”
Section: Methods
confidence: 99%
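
For the output-layer choice mentioned above, the short sketch below contrasts a Sigmoid head (independent per-label probabilities, typical for binary or multi-label targets) with a Softmax head (a distribution over mutually exclusive classes). It is a generic PyTorch illustration with placeholder dimensions, not the cited authors' configuration.

```python
# Hypothetical comparison of Sigmoid and Softmax output layers (PyTorch).
import torch
import torch.nn as nn

features = torch.randn(8, 16)  # a batch of 8 feature vectors

# Sigmoid head: one independent probability per label.
sigmoid_head = nn.Sequential(nn.Linear(16, 3), nn.Sigmoid())
label_probs = sigmoid_head(features)       # shape (8, 3), each value in (0, 1)

# Softmax head: a probability distribution over 5 mutually exclusive classes.
softmax_head = nn.Sequential(nn.Linear(16, 5), nn.Softmax(dim=1))
class_probs = softmax_head(features)       # shape (8, 5), each row sums to 1

print(label_probs.shape, class_probs.sum(dim=1))
```

In practice, losses such as nn.BCEWithLogitsLoss and nn.CrossEntropyLoss expect raw logits and apply the corresponding activation internally for numerical stability.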