2022
DOI: 10.53525/jster.1018213

White Blood Cell Classification Using Convolutional Neural Network

Cited by 10 publications (6 citation statements)
References 16 publications
“…The performance of the proposed method was evaluated using the Kaggle dataset, resulting in an overall accuracy of 98.55%. Nahzat et al [36] aimed to develop a CNN-based model for the classification of WBCs. They used images of WBCs from the Kaggle dataset to train and evaluate their proposed model, testing it with various optimizers to determine the best performance.…”
Section: Discussion
confidence: 99%
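
The statement above describes training a CNN on the Kaggle WBC images and comparing several optimizers to find the best-performing one. The Keras sketch below illustrates that workflow only; the architecture, input size, class count, and optimizer list are illustrative assumptions, not the cited authors' exact configuration.

```python
# Minimal sketch: a small CNN for WBC image classification, trained once per
# candidate optimizer so the best-performing one can be kept.
# Architecture, input size, class count, and optimizer list are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_wbc_cnn(input_shape=(128, 128, 3), num_classes=4):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

candidate_optimizers = {
    "adam": tf.keras.optimizers.Adam(),
    "sgd": tf.keras.optimizers.SGD(momentum=0.9),
    "rmsprop": tf.keras.optimizers.RMSprop(),
    "adamax": tf.keras.optimizers.Adamax(),
}

for name, optimizer in candidate_optimizers.items():
    model = build_wbc_cnn()
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=20)  # data loading omitted
    # Keep whichever optimizer yields the highest validation accuracy.
```
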
“…Convolution is divided into two independent layers by MobileNetV2. Each channel is individually subjected to a depth-wise convolution in the first layer, known as the depth-wise convolution [37]. The outputs of the depth-wise convolution are combined in the second layer, referred to as the point-wise convolution, using a 1 × 1 convolution.…”
Section: Fine-tuned MobileNetV2
confidence: 99%
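
The two-layer split quoted above can be sketched in Keras as follows: a depth-wise 3×3 convolution filters each channel independently, and a point-wise 1×1 convolution then combines the per-channel outputs. The BatchNorm/ReLU6 placement follows common MobileNetV2 practice and is an assumption here, not taken from the cited paper.

```python
# Sketch of a depth-wise separable block in the MobileNetV2 style.
from tensorflow.keras import layers

def depthwise_separable_block(x, out_channels, stride=1):
    # Layer 1 (depth-wise): one 3x3 filter applied to each input channel separately.
    x = layers.DepthwiseConv2D(kernel_size=3, strides=stride,
                               padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU(max_value=6.0)(x)  # ReLU6 nonlinearity
    # Layer 2 (point-wise): a 1x1 convolution that mixes the per-channel outputs.
    x = layers.Conv2D(out_channels, kernel_size=1,
                      padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return x
```
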
“…The Adamax optimizer is a variation of the Adam optimizer, which excels at solving problems with huge parameter spaces and sparse gradients [36]. An optimization approach called Adaptive Gradient (AdaGrad) adjusts the learning rate of each parameter based on previous gradients [37]. The fundamental idea underlying AdaGrad is to provide each parameter with a unique learning rate dependent on the size of its previous gradient.…”
Section: Different Optimizers Employed for Best Fine-Tuned TL Model
confidence: 99%
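
A compact NumPy sketch of the AdaGrad idea quoted above: each parameter accumulates its own sum of squared past gradients, and its effective learning rate shrinks as that sum grows. (Adamax, by contrast, scales steps by an exponentially weighted infinity norm of past gradients.) The toy objective and constants below are illustrative assumptions.

```python
# Sketch of AdaGrad's per-parameter adaptive learning rate (NumPy only).
import numpy as np

def adagrad_step(param, grad, accum, lr=0.1, eps=1e-8):
    accum += grad ** 2                               # running sum of squared gradients
    param -= lr * grad / (np.sqrt(accum) + eps)      # larger history -> smaller step
    return param, accum

# Toy usage: minimise f(w) = w1^2 + 10*w2^2, whose gradient is (2*w1, 20*w2).
w = np.array([3.0, 3.0])
accum = np.zeros_like(w)
for _ in range(200):
    grad = np.array([2.0 * w[0], 20.0 * w[1]])
    w, accum = adagrad_step(w, grad, accum)
print(w)  # w moves toward the minimum at (0, 0), each coordinate at its own step size
```
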
“…Nahzat et al [19] designed a CNN-based model using the Kaggle BCCD dataset, achieving competitive results. They also focused on five WBC types.…”
Section: Related Work
confidence: 99%