2023
DOI: 10.1007/978-981-19-9819-5_21
Machine Learning Algorithm for Classification of Alopecia Areata from Human Scalp Hair Images

Cited by 2 publications (2 citation statements)
References 20 publications
“…Of the images collected, 70% was used for training and the remaining 30% was used for testing. The considered existing models were the AB-MTEDeep [16], RICAP-CNN [17], EF-GAN-VGG16 [19], and IAGAN [22], which were applied to the Figaro1k and Dermnet databases for the AA classification, using the same proportions for the training and testing of the models. Figure 3 shows the generator and discriminator loss curves in the AA-GAN model, while Figure 4 […] MTEDeep was 17.6% higher than RICAP-CNN's, 14% higher than the EF-GAN-VGG16's, 8.6% higher than the IAGAN's, and 1.9% higher than that of the AB-MTEDeep model.…”
Section: Results
confidence: 99%
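The quoted results describe a standard 70/30 hold-out evaluation applied identically to AA-GAN-MTEDeep and the baseline models. Purely as an illustrative sketch, and not the authors' code, such a split could be done with scikit-learn's train_test_split; the image_paths and labels inputs, the fixed seed, and the stratified option are assumptions added here, not details given in the quoted passage.

```python
# Illustrative sketch of a 70/30 train/test split as described in the quoted study.
# `image_paths`, `labels`, and the stratified option are hypothetical, not from the paper.
from sklearn.model_selection import train_test_split

def split_dataset(image_paths, labels, seed=42):
    """Hold out 30% of the images for testing, keeping 70% for training."""
    return train_test_split(
        image_paths,
        labels,
        test_size=0.30,     # 30% of the data is reserved for testing
        stratify=labels,    # preserve class proportions across splits (assumption)
        random_state=seed,  # fixed seed so the split is reproducible
    )

# Usage with hypothetical data:
# X_train, X_test, y_train, y_test = split_dataset(paths, labels)
```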
“…But if the network depth is increased, a degradation problem occurs, causing overfitting. Therefore, the Attention-based Balanced Multi-Tasking Ensembling Deep (AB-MTEDeep) model was developed, which adopted cross-residual learning in the LSTM with FRCNN to improve efficiency [16]. However, the robustness and generalization ability of deep learning models largely depend on the amount of data available in the training phase.…”
Section: Introduction
confidence: 99%
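The degradation problem mentioned here, accuracy worsening as plain networks grow deeper, is what residual (skip) connections are designed to counter. The following is a generic residual block in PyTorch, given only as a sketch of that idea; it is not the cross-residual LSTM/FRCNN design of AB-MTEDeep, and the channel size and layer choices are assumptions.

```python
# Generic residual block: the identity shortcut lets gradients bypass the stacked
# convolutions, which mitigates the degradation problem in deeper networks.
# Illustration of the principle only, not the AB-MTEDeep architecture.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # add the skip connection before the final activation
```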