2022
DOI: 10.1109/access.2022.3181225
FixCaps: An Improved Capsules Network for Diagnosis of Skin Cancer

Abstract: The early detection of skin cancer substantially improves the five-year survival rate of patients. It is often difficult to distinguish early malignant tumors from skin images, even by expert dermatologists. Therefore, several classification methods of dermatoscopic images have been proposed, but they have been found to be inadequate or defective for skin cancer detection, and often require a large amount of calculations. This study proposes an improved capsule network called FixCaps for dermoscopic image clas…

Cited by 40 publications (19 citation statements) · References 33 publications
“…It can be seen from Table 5 that our proposed model was improved by 2.42 percentage points and 1.57 percentage points, respectively, compared with the two DenseNet201 and ConvNeXt_L baseline models. Compared with CNN [54], IM-CNN [22] and IRv2-RA [11], our proposed model outperformed them in terms of accuracy by 9.31%, 1.89% and 0.19%, respectively, but compared with FixCaps [15], our proposed model was 1.2% lower in terms of accuracy. Accuracy (%): IRv2-RA [11] 93.47; FixCaps [15] 96.49; IM-CNN [22] 95.10; CNN [54] 85.98; Ours 95.29…”
Section: The Second Datasetmentioning
confidence: 82%
“…For a fair comparison with the other models, we divided the dataset in two ways. In the first way, 828 skin disease images were randomly extracted as the test set, matching the dataset division used by models such as IRv2-RA [11] and FixCaps [15]. In the second way, we randomly divided the training set and the test set according to a ratio of 8:2.…”
Section: Datasetsmentioning
confidence: 99%
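The two split strategies quoted above can be sketched in plain Python. This is a minimal illustration, not the cited paper's actual code: the function names are hypothetical, and the dataset size of 10,015 images corresponds to the HAM10000 benchmark commonly used for this task.

```python
import random

def split_fixed_test(images, test_size=828, seed=0):
    """First strategy: randomly hold out a fixed number of images
    (828, matching the split used by IRv2-RA and FixCaps)."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]

def split_ratio(images, train_frac=0.8, seed=0):
    """Second strategy: a random 8:2 train/test split."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

data = [f"img_{i:05d}.jpg" for i in range(10015)]  # HAM10000 has 10,015 images
train_a, test_a = split_fixed_test(data)   # 9,187 train / 828 test
train_b, test_b = split_ratio(data)        # 8,012 train / 2,003 test
```

Fixing the seed keeps both splits reproducible, which matters when comparing accuracy numbers across papers that share the same held-out test set.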
“…The proposed ensemble of seven predictors has been evaluated on the ISIC-2018 publicly available dataset and yields better performance as compared to existing methods. The early detection of skin cancer has been performed in [35] using an improved capsule network (CapsNet) named FixCaps. By using a large 31×31 kernel, the proposed method obtains a larger receptive field than the baseline CapsNet, which not only improves its detection performance but also reduces the computational overhead.…”
Section: Related Workmentioning
confidence: 99%
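The receptive-field effect of the large kernel can be checked with the standard recurrence for stacked convolutions. The sketch below is an illustration under the assumption that the baseline CapsNet front end uses two 9×9 convolutions (the second with stride 2), as in the original CapsNet design; the exact FixCaps layer configuration is not reproduced here.

```python
def receptive_field(layers):
    """Theoretical receptive field of a stack of conv layers.
    Each layer is a (kernel_size, stride) pair; the recurrence is
    r_l = r_{l-1} + (k_l - 1) * jump_{l-1}, jump_l = jump_{l-1} * s_l."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Assumed baseline CapsNet front end: two 9x9 convs, second with stride 2.
baseline_rf = receptive_field([(9, 1), (9, 2)])   # -> 17
# A single large 31x31 kernel in the first layer, as FixCaps uses.
large_kernel_rf = receptive_field([(31, 1)])      # -> 31
```

A single 31×31 layer thus sees a wider input patch than the two stacked 9×9 layers combined, which is consistent with the quoted claim that the larger kernel enlarges the receptive field.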
“…For instance, Zunair and Hamza [18] employed CycleGAN, a GAN model consisting of dual-generator and discriminator modules, to effectively increase the representation of the minority class in the dataset [21]. On the other hand, researchers such as Datta et al [19] and Lan et al [22] opted for alternative transformation methods, adjusting image rotation and focus to diversify the dataset without resorting to GANs. While these research papers used different methods to address the class imbalance issue, they did not propose any solutions to handle the low inter-class variation in the generated synthetic datasets.…”
Section: Related Workmentioning
confidence: 99%
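A GAN-free balancing strategy in the spirit of the rotation/focus transforms mentioned above can be sketched as simple oversampling with rotated copies. This is a hypothetical illustration (the function names and toy images are not from the cited papers):

```python
import random
from collections import Counter

def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def oversample_with_rotation(samples, seed=0):
    """Balance classes by appending rotated copies of minority-class
    images until every class matches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(label for _, label in samples)
    target = max(counts.values())
    out = list(samples)
    for label, n in counts.items():
        pool = [img for img, lbl in samples if lbl == label]
        for _ in range(target - n):
            out.append((rotate90(rng.choice(pool)), label))
    return out

# Toy example: one melanoma image vs. three nevus images.
samples = [([[1, 2], [3, 4]], "mel")] + [([[0, 0], [0, 0]], "nv")] * 3
balanced = oversample_with_rotation(samples)
```

Note that rotated copies of the same image stay highly correlated, which mirrors the quoted criticism: transform-based augmentation alone does not address low inter-class (or intra-class) variation in the resulting dataset.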