2019 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE)
DOI: 10.1109/wiecon-ece48653.2019.9019943
Convolutional Neural Network Models for Content Based X-Ray Image Classification

Cited by 4 publications (2 citation statements)
References 11 publications
“…CosFace [14] used the estimated clustering uncertainty of unlabelled samples to adjust the loss function weight to reduce the overlapping-identity label noise; however, it requires balanced labelled and unlabelled samples to estimate clustering uncertainty accurately, which is a major limitation. There are a few methods based on transfer learning [15,16] and meta-learning [17,18]. Ahn et al. [15] proposed a hierarchical unsupervised feature extractor, which has a convolutional autoencoder on top of a pretrained convolutional neural network (CNN).…”
Section: Introduction (mentioning)
confidence: 99%
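The citing paper characterises [15] as a convolutional autoencoder stacked on top of a pretrained CNN and trained without labels. The sketch below illustrates that general idea in PyTorch; the ResNet-18 backbone, layer widths, and training step are assumptions for illustration only, not the actual architecture of [15], and the torchvision weights API shown requires torchvision ≥ 0.13.

```python
# Hypothetical sketch: convolutional autoencoder on top of a frozen, pretrained CNN,
# in the spirit of the hierarchical unsupervised feature extractor described for [15].
import torch
import torch.nn as nn
from torchvision import models

class FeatureAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Pretrained backbone used only as a fixed feature extractor (assumed: ResNet-18).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, 7, 7)
        for p in self.features.parameters():
            p.requires_grad = False

        # Convolutional autoencoder trained unsupervised to reconstruct the backbone's
        # feature maps; the bottleneck activation is the learned representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 512, kernel_size=3, padding=1),
        )

    def forward(self, x):
        with torch.no_grad():
            f = self.features(x)          # frozen CNN features
        z = self.encoder(f)               # bottleneck code
        return self.decoder(z), f, z

model = FeatureAutoencoder()
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(4, 3, 224, 224)           # stand-in for a batch of unlabelled images
opt.zero_grad()
recon, target, code = model(x)
loss = loss_fn(recon, target)             # reconstruction loss; no labels required
loss.backward()
opt.step()
```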
“…Ahn et al. [15] proposed a hierarchical unsupervised feature extractor, which has a convolutional autoencoder on top of a pretrained convolutional neural network (CNN). Arti P. et al. [16] fine-tuned the pre-trained AlexNet [2] and GoogleNet [19] for X-ray image classification. Maicas et al. [18] designed an unsupervised pretext task for meta-learning and then trained the model for medical image classification.…”
Section: Introduction (mentioning)
confidence: 99%
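The statement above summarises the indexed paper ([16]) as fine-tuning ImageNet-pretrained AlexNet and GoogLeNet for X-ray image classification. Below is a minimal PyTorch sketch of that transfer-learning recipe; the number of classes, optimizer, and hyperparameters are placeholders rather than the paper's settings, and the torchvision weights API shown requires torchvision ≥ 0.13.

```python
# Hypothetical sketch: fine-tuning ImageNet-pretrained AlexNet and GoogLeNet
# for X-ray image classification, as described for [16].
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder for the number of X-ray categories

def build_alexnet(num_classes):
    net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    net.classifier[6] = nn.Linear(4096, num_classes)      # replace the ImageNet head
    return net

def build_googlenet(num_classes):
    # Auxiliary classifiers are dropped when pretrained weights are loaded
    # without explicitly requesting aux_logits, so forward() returns plain logits.
    net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, num_classes)   # replace the ImageNet head
    return net

def finetune(net, loader, epochs=5, lr=1e-4, device="cpu"):
    """Fine-tune all layers with cross-entropy; loader yields (3x224x224 image, label)."""
    net.to(device).train()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(net(images), labels)
            loss.backward()
            opt.step()
    return net

# Usage (train_loader is a hypothetical DataLoader over the X-ray dataset):
# alexnet_xray = finetune(build_alexnet(NUM_CLASSES), train_loader)
# googlenet_xray = finetune(build_googlenet(NUM_CLASSES), train_loader)
```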