Melanoma localization and classification through faster region-based convolutional neural network and SVM
2021
DOI: 10.1007/s11042-021-11120-7

Cited by 38 publications (19 citation statements)
References 48 publications
“…While for the one-stage object detection methods, the position and class of RoI are determined in a single step. In the case of two-stage approaches, we have chosen the Fast-RCNN ( 53 ), Faster-RCNN ( 4 , 54 ), and Mask-RCNN ( 55 ) models, while for the other, we have taken the RetinaNet ( 56 ) and conventional CenterNet ( 21 ) models.…”
Section: Experiments and Resultsmentioning
confidence: 99%
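The quoted distinction can be made concrete with a deliberately tiny sketch (not the cited models): a two-stage detector (Fast/Faster/Mask-RCNN style) first proposes RoIs and then classifies each proposal, while a one-stage detector (RetinaNet/CenterNet style) decides position and class together in a single sweep. The grid, threshold, and class names here are illustrative assumptions.

```python
# Toy contrast between the two detector families named in the quote.
# Two-stage: stage 1 proposes regions of interest, stage 2 classifies them.
# One-stage: position and class are determined in a single step.

def two_stage_detect(score_map, threshold=0.5):
    """Stage 1: collect candidate RoIs; stage 2: assign a class to each RoI."""
    proposals = [(r, c) for r, row in enumerate(score_map)
                 for c, s in enumerate(row) if s > threshold]        # stage 1
    return [((r, c), "lesion" if score_map[r][c] > 0.8 else "benign")
            for r, c in proposals]                                   # stage 2

def one_stage_detect(score_map, threshold=0.5):
    """Position and class are decided together, in one pass over the grid."""
    return [((r, c), "lesion" if s > 0.8 else "benign")
            for r, row in enumerate(score_map)
            for c, s in enumerate(row) if s > threshold]

grid = [[0.1, 0.9],
        [0.6, 0.2]]
print(two_stage_detect(grid))  # [((0, 1), 'lesion'), ((1, 0), 'benign')]
print(one_stage_detect(grid) == two_stage_detect(grid))  # True
```

Real two-stage models gain accuracy from the dedicated proposal step at the cost of an extra pass; one-stage models trade some accuracy for speed.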
“…At the start, a keypoint extractor unit is used which extracts the reliable feature vector that is employed to estimate the heatmaps (Hms), embeddings, offset, and class (C). The Hms is concerned to give the approximation if a specific location in a sample is a TL/BR corner associated with a particular category [51], while the embeddings are used to discriminate the detected pairs of corners and offsets to fine-tune the box position.
The corners with high-scored TL and BR coordinates are employed to regulate the exact position of the box, whereas the associated category for each detected diseased region is specified by using the embedding distances on the computed feature vector.…”

Comparison table embedded in the same citing paper:

Author               Method                           Task            Dataset                     Accuracy
—                    U-Net + SVM                      Categorization  ISIC-2016                   76%
Daghrir et al [23]   SIFT + SVM and KNN               Categorization  ISIC-2017                   88.40%
Bama et al [24]      GMM model                        Segmentation    PH2                         86.83%
Hu et al [25]        SIFT + SVM                       Categorization  PH2                         82%
Durgarao et al [44]  LVP and LBP + C-means            Segmentation    PH2                         79.44%
DL techniques:
Ameri et al [26]     AlexNet                          Categorization  HAM10000                    84%
Acosta et al [27]    ResNet-152                       Categorization  ISIC-2017                   90.40%
Zhang et al [28]     VGG-16                           Categorization  ISIC-2017                   92.72%
Shan et al [29]      FC-DPN                           Segmentation    ISIC-2017                   95.14%
Bi et al [30]        Res-FCN                          Segmentation    ISIC-2016                   95.78%
Adegun et al [31]    Encoder-decoder                  Categorization  ISIC-2017                   95%
Nawaz et al [32]     Faster-RCNN + FKM                Segmentation    PH2                         95.6%
Nawaz et al [35]     Faster-RCNN + SVM                Categorization  ISIC-2016                   89.10%
Banerjee et al [36]  YOLO + L-type fuzzy clustering   Segmentation    ISIC-2017                   97.33%
Iqbal et al [37]     CNN                              Categorization  ISIC-2019                   88.75%
Khan et al [38]      Mask-RCNN, DenseNet201 + SVM     Segmentation    ISIC-2016                   93.6%
Mohakud et al [39]   Encoder-decoder                  Segmentation    ISIC-2016                   98.32%
Abdar et al [40]     Bayesian model                   Categorization  Kaggle skin cancer dataset  88.95%
Pacheco et al [41]   Metadata and block-based method  Categorization  ISIC-2019                   74.90%
Wang et al [42]      U-Net                            Segmentation    ISIC-2017                   94.67%
Zhao et al [43]      U-Net++                          Segmentation    ISIC-2018                   95.30%
Ali et al [46]       DCNN                             Categorization  HAM10000                    91.93%
Section: Methodsmentioning
confidence: 99%
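The corner-grouping step the quote describes (CornerNet-style) can be sketched in a few lines: each detected top-left (TL) and bottom-right (BR) corner carries an embedding, corners with the closest embeddings are paired into one box, and per-corner offsets fine-tune the final coordinates. All numbers, the greedy pairing, and the tuple layout below are illustrative assumptions, not the cited model.

```python
# Hypothetical sketch: pair TL and BR corners by embedding distance,
# then apply each corner's offset to fine-tune the box position.

def pair_corners(tl_corners, br_corners):
    """tl/br corners: lists of (x, y, embedding, offset). Greedy nearest-embedding pairing."""
    boxes = []
    used = set()
    for tx, ty, te, toff in tl_corners:
        # pick the unused BR corner whose embedding is closest to this TL's
        best = min((i for i in range(len(br_corners)) if i not in used),
                   key=lambda i: abs(te - br_corners[i][2]), default=None)
        if best is None:
            continue
        used.add(best)
        bx, by, be, boff = br_corners[best]
        # offsets nudge the final coordinates
        boxes.append((tx + toff, ty + toff, bx + boff, by + boff))
    return boxes

tl = [(10, 10, 0.2, 1), (50, 40, 0.9, 0)]
br = [(80, 70, 0.88, -1), (30, 30, 0.25, 0)]
print(pair_corners(tl, br))  # [(11, 11, 30, 30), (50, 40, 79, 69)]
```

Production systems pair corners with learned pull/push losses on the embeddings rather than a greedy scan, but the matching-by-distance idea is the same.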
“…Nawaz et al [35] presented another framework for the automated identification and classification of the skin model via employing the Faster-RCNN model together with the SVM approach. The work elaborated in [35] is robust to melanoma classification because of its empowerment to tackle the model overfitting data. However, the works discussed in [32,35] have highly relied on the selection of hyperparameters in the model training phase.…”
Section: Related Workmentioning
confidence: 99%
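The hybrid pipeline the quote attributes to [35] — a deep detector supplying a feature vector per localized lesion, with an SVM making the final class call — can be illustrated with a minimal stand-in: the "extractor" below is a stub, and the SVM is a bare hinge-loss linear classifier trained by sub-gradient descent. The data, learning rate, and labels are all invented for the example.

```python
# Loose sketch of a detector-features -> SVM pipeline (not the cited system).

def extract_features(region):
    # stand-in for Faster-RCNN features: any fixed-length vector works here
    return region

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Minimal hinge-loss linear SVM via sub-gradient descent; y in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                          # hinge-loss sub-gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                                   # only regularize
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

regions = [[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -1.5]]
X = [extract_features(r) for r in regions]
y = [1, 1, -1, -1]                                  # +1 melanoma, -1 benign (toy labels)
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])  # [1, 1, -1, -1]
```

Decoupling the classifier from the detector in this way is one reason the quote credits [35] with resisting overfitting: the SVM's margin objective regularizes the final decision even when the feature extractor is large.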
“…Data augmentation is a way to expand the training dataset by transforming input images without having to collect new datasets for model training, thus avoiding the overfitting issue that might occur during the training process when a small amount of training data is used. These papers use data augmentation for performance enhancement: [25], [26], [34]-[36], [40], [41], [46], [47], [49], [50], [52], [56], [58], [60], [61], [64], [67], [68], [85], [88], [90], [91], [98], [105], [107], [109], [115], [116], [122], [124]-[127], [130], [151], [152], [154], [155], [158], [159], [162]-[166], [168]-[170], [180]-[186]. The literature includes several w...…”
Section: Data Augmentationmentioning
confidence: 99%
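The augmentation idea in the quote — deriving extra training images from existing ones instead of collecting new data — can be shown with a minimal stdlib sketch operating on a tiny nested-list "image". The specific transforms (horizontal flip, 90-degree rotation) are common choices, not ones the quote enumerates.

```python
# Minimal data augmentation sketch: expand a dataset with flipped and
# rotated copies of each image, without collecting any new data.

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate 90 degrees clockwise: reverse row order, then transpose."""
    return [list(col) for col in zip(*img[::-1])]

def augment(dataset):
    """Each image yields itself plus two transformed variants."""
    out = []
    for img in dataset:
        out.extend([img, hflip(img), rot90(img)])
    return out

img = [[1, 2],
       [3, 4]]
aug = augment([img])
print(len(aug))   # 3
print(aug[1])     # [[2, 1], [4, 3]]
print(aug[2])     # [[3, 1], [4, 2]]
```

Library pipelines (e.g. in torchvision or Albumentations) apply such transforms randomly at load time rather than materializing every variant, but the effect on effective dataset size is the same.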