AutoFCL: automatically tuning fully connected layers for handling small dataset
2021
DOI: 10.1007/s00521-020-05549-4
Cited by 26 publications (13 citation statements)
References 41 publications
“…Currently, the three main approaches to training deep learning systems are manual architecture construction, transfer learning, and neural architecture search [39, 40, 41]. We used transfer learning alongside manually created networks to assess the importance of waveband triplets in previous research.…”
Section: Methods (mentioning)
confidence: 99%
“…In this section, we briefly review the most relevant research on: (1) color detection [8–14]; and (2) flower classification using deep learning [3, 5, 15–20].…”
Section: Methods (mentioning)
confidence: 99%
“…A comparison of the performance of several deep learning architectures is also made in the work by Basha et al. [20]. In that research, they compared the performance of VGG16, ResNet-50, MobileNet, DenseNet, and NASNet-Mobile, combined with a fine-tuning method, on several datasets including the Oxford-102 Flower dataset.…”
Section: Methods (mentioning)
confidence: 99%
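The fine-tuning recipe this statement refers to generally amounts to loading an ImageNet-pretrained backbone and swapping its classifier head for one sized to the small target dataset. Below is a minimal sketch of that idea using torchvision's ResNet-50; the class count, freezing policy, and optimizer settings are illustrative assumptions, not the cited papers' exact setup.

```python
# Minimal transfer-learning sketch (hypothetical settings, not the cited setup).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 102  # e.g., Oxford-102 Flowers

# Load an ImageNet-pretrained backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the convolutional backbone so only the new head trains.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a head for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optimize only the parameters that require gradients (here, the new head).
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch from the small target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unfreezing deeper backbone blocks with a smaller learning rate is the usual next step when the target dataset is large enough to support it.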
“…Victoria et al. [29] used the BO algorithm to optimize hyperparameters and enhance the performance of a CNN model. In [15] and [16], Basha et al. presented methods for auto-tuning hyperparameters using Bayesian optimization. The scope of [15] is limited to auto-tuning the FC layers of CNN models, while [16] automatically tunes some CNN layers as well.…”
Section: Related Work (mentioning)
confidence: 99%
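For intuition about what "auto-tuning the FC layers with Bayesian optimization" means in practice, here is a minimal sketch using scikit-optimize's Gaussian-process-based gp_minimize. The search space and the evaluate_fc_head helper are illustrative assumptions, not the spaces or objectives used in [15] or [16].

```python
# Sketch: Bayesian optimization over FC-layer hyperparameters with scikit-optimize.
from skopt import gp_minimize
from skopt.space import Integer, Real
from skopt.utils import use_named_args

space = [
    Integer(1, 3, name="num_fc_layers"),  # how many FC layers to append
    Integer(64, 1024, name="units"),      # neurons per FC layer
    Real(0.0, 0.6, name="dropout"),       # dropout rate between layers
]

def evaluate_fc_head(num_fc_layers: int, units: int, dropout: float) -> float:
    """Hypothetical stand-in: in a real run this would attach an FC head with
    these settings to a frozen backbone, train briefly on the small dataset,
    and return the validation error. A dummy value keeps the sketch runnable."""
    return (units - 256) ** 2 / 1e6 + dropout * 0.1 + num_fc_layers * 0.01

@use_named_args(space)
def objective(num_fc_layers, units, dropout):
    # gp_minimize minimizes, so return an error, not an accuracy.
    return evaluate_fc_head(num_fc_layers, units, dropout)

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("Best FC configuration:", result.x, "validation error:", result.fun)
```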
“…NAS methods search for the structure of the model from scratch and hence require many GPU hours. Basha et al. [15], [16] proposed methods to auto-tune CNN models for improved transfer learning using a Bayesian optimization technique. Motivated by the success of [15] in tuning the FC layers following a CNN, in terms of both accuracy and GPU hours, we apply the TPE algorithm, a variant of Bayesian optimization, to fine-tune the FC layers following a self-supervised model.…”
Section: Introduction (mentioning)
confidence: 99%
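The TPE algorithm mentioned in this statement has a standard implementation in hyperopt. The sketch below is a hypothetical setup: the search space and the dummy objective stand in for training an FC head on features from a frozen self-supervised encoder, which is what the citing work describes.

```python
# Sketch: TPE (Tree-structured Parzen Estimator) search over an FC head,
# using hyperopt. The space and objective are illustrative assumptions.
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK

space = {
    "num_layers": hp.choice("num_layers", [1, 2, 3]),
    "units": hp.quniform("units", 64, 1024, 64),
    "dropout": hp.uniform("dropout", 0.0, 0.6),
}

def objective(params):
    # Hypothetical stand-in for: attach an FC head with these settings to a
    # frozen self-supervised encoder, train briefly, return validation loss.
    dummy_loss = (
        abs(params["units"] - 512) / 1024
        + params["dropout"] * 0.1
        + params["num_layers"] * 0.01
    )
    return {"loss": dummy_loss, "status": STATUS_OK}

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=30, trials=trials)
# Note: for hp.choice dimensions, fmin reports the chosen index, not the value.
print("Best FC-head configuration found by TPE:", best)
```

Unlike GP-based Bayesian optimization, TPE models densities over good and bad configurations rather than the objective surface itself, which tends to scale better to conditional and discrete spaces like layer counts.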