2023
DOI: 10.3390/electronics12040964

Interactive Effect of Learning Rate and Batch Size to Implement Transfer Learning for Brain Tumor Classification

Abstract: For classifying brain tumors with small datasets, the knowledge-based transfer learning (KBTL) approach has performed very well in attaining an optimized classification model. However, its successful implementation is typically affected by different hyperparameters, specifically the learning rate (LR), the batch size (BS), and their joint influence. In general, most of the existing research could not achieve the desired performance because it tuned only one hyperparameter at a time. This study adopted a Car…
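The central manipulation described in the abstract is a joint sweep over learning rate and batch size while fine-tuning a pretrained network. Below is a minimal sketch of such a grid search in PyTorch; the ResNet18 backbone, the dataset paths, the grid values, and the epoch count are illustrative assumptions, not the configuration reported in the paper.

```python
# Hypothetical joint LR x BS sweep for transfer-learning fine-tuning.
# Backbone, data paths, and grid values are illustrative assumptions.
import itertools
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("brain_mri/train", transform=tfm)  # hypothetical path
val_set = datasets.ImageFolder("brain_mri/val", transform=tfm)      # hypothetical path

learning_rates = [1e-2, 1e-3, 1e-4]   # illustrative grid, not the paper's
batch_sizes = [16, 32, 64]

def evaluate(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.size(0)
    return correct / total

results = {}
for lr, bs in itertools.product(learning_rates, batch_sizes):
    # Fresh pretrained backbone per grid cell, with a replaced classifier head.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    model.to(device)

    loader = DataLoader(train_set, batch_size=bs, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=bs)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(3):                 # a few epochs per cell, for illustration
        model.train()
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()

    results[(lr, bs)] = evaluate(model, val_loader)

best = max(results, key=results.get)
print(f"best (lr, batch_size): {best} -> accuracy {results[best]:.3f}")
```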

Cited by 18 publications (7 citation statements) · References 39 publications
“…In this scenario, larger batch sizes produce higher accuracy compared to smaller sizes. This is because a larger batch size will speed up the network computing process [23]. Smaller regularization and dropout sizes produce higher accuracy.…”
Section: Results and Analysis
confidence: 99%
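The computational-speed part of this claim is easy to probe empirically. The sketch below times one training epoch of a small throwaway model at a few batch sizes; the model, the synthetic data, and the sizes are assumptions for illustration only and come from neither paper.

```python
# Rough timing sketch: larger batches typically reduce per-epoch wall-clock
# time on parallel hardware. Model and data are synthetic stand-ins.
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Synthetic stand-in for an image dataset: 2048 samples of shape 3x64x64, 4 classes.
data = TensorDataset(torch.randn(2048, 3, 64, 64), torch.randint(0, 4, (2048,)))

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4)).to(device)
loss_fn = nn.CrossEntropyLoss()

for bs in (8, 32, 128):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loader = DataLoader(data, batch_size=bs, shuffle=True)
    start = time.perf_counter()
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        opt.step()
    print(f"batch_size={bs:4d}  one epoch: {time.perf_counter() - start:.2f}s")
```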
“…The goal is to acquire numerous kernel sizes inside the network rather than sequentially stacking them and ordering each to function at the same stage. Szegedy et al. (2016) created the first version of the inception architecture, called GoogLeNet. The suggested model has 27 levels, including inception layers.…”
Section: Methods
confidence: 99%
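To make the "numerous kernel sizes at the same stage" idea concrete, here is a simplified Inception-style block in PyTorch: parallel 1×1, 3×3, and 5×5 convolutions applied to the same input and concatenated along the channel dimension. This is a naive sketch; GoogLeNet's actual blocks also include 1×1 dimensionality reductions and a pooling branch.

```python
# Naive Inception-style block: several kernel sizes run in parallel on the
# same input instead of being stacked sequentially.
import torch
import torch.nn as nn

class NaiveInceptionBlock(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=5, padding=2)

    def forward(self, x):
        # All branches see the same input; their outputs are merged channel-wise.
        return torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)

block = NaiveInceptionBlock(in_ch=64, out_ch_per_branch=32)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 96, 56, 56])
```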
“…During the retraining, the weights of the network are kept unchanged and only the parameters of the replaced layers are modified. Alternatively, the whole model can be retrained (unfreezing all layers) with a reduced value of the learning rate, Usmani et al. [111].…”
Section: Transfer Learning In Fire Detection With Deep Learning Techn...
confidence: 99%
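Both retraining strategies described in this statement can be sketched with a torchvision-pretrained backbone. The ResNet18 backbone, the class count, and the learning-rate values below are illustrative assumptions rather than the cited setup.

```python
# Two hedged sketches: (1) freeze the backbone and train only the replaced
# head; (2) unfreeze everything and fine-tune with a much smaller learning rate.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # hypothetical number of target classes

# Strategy 1: keep the pretrained weights fixed, train only the new layer.
frozen = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in frozen.parameters():
    p.requires_grad = False
frozen.fc = nn.Linear(frozen.fc.in_features, num_classes)  # replaced layer stays trainable
opt_frozen = torch.optim.Adam(frozen.fc.parameters(), lr=1e-3)

# Strategy 2: unfreeze all layers and retrain the whole model with a reduced
# learning rate so the pretrained weights shift only slightly.
full = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
full.fc = nn.Linear(full.fc.in_features, num_classes)
opt_full = torch.optim.Adam(full.parameters(), lr=1e-5)
```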