2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00330
Towards Automated Melanoma Detection With Deep Learning: Data Purification and Augmentation

Abstract: Melanoma is one of the ten most common cancers in the US. Early detection is crucial for survival, but the cancer is often diagnosed at a fatal stage. Deep learning has the potential to improve cancer detection rates, but its applicability to melanoma detection is compromised by the limitations of the available skin-lesion databases, which are small, heavily imbalanced, and contain images with occlusions. We build deep-learning-based tools for data purification and augmentation to counteract these limitations.


Cited by 106 publications (41 citation statements) · References 26 publications
“…These include technical variations (e.g., camera hardware and software) and differences in image acquisition and quality (e.g., zoom level, focus, lighting, and presence of hair). For example, the presence of surgical ink markings is associated with decreased specificity (Winkler et al, 2019), field of view can significantly affect prediction quality (Mishra et al, 2019), and classification performance improves when hair and rulers are removed (Bisla et al, 2019). We have developed a method to measure how model predictions might be biased by the presence of a visual artifact (e.g., ink) and proposed methods to reduce such biases (Pfau et al, 2019).…”
Section: Considerations Surrounding Clinical Adoption
confidence: 99%
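The citation statement above notes that classification performance improves when occlusions such as hair and rulers are removed before training. As a rough illustration of that idea (not the pipeline of Bisla et al., which uses learned segmentation; the function name, threshold, and window size here are hypothetical), a minimal numpy-only sketch can mask dark artifacts and inpaint them with the local mean of clean neighbors:

```python
import numpy as np

def remove_dark_occlusions(img, thresh=0.25, win=5):
    """Toy occlusion removal: mask dark pixels (assumed to be hair or
    ruler marks) and replace each with the mean of nearby unmasked
    pixels. `img` is an (H, W, 3) float array in [0, 1]."""
    gray = img.mean(axis=2)
    mask = gray < thresh               # dark pixels flagged as occlusion
    out = img.copy()
    h, w = gray.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        clean = ~mask[y0:y1, x0:x1]    # unmasked neighbors in the window
        if clean.any():
            out[y, x] = img[y0:y1, x0:x1][clean].mean(axis=0)
    return out, mask
```

Production pipelines typically use morphological blackhat filtering plus proper inpainting (e.g. OpenCV's `cv2.inpaint`) rather than a brightness threshold, which would also flag dark lesion tissue.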
“…The model was evaluated on the ISBI 2017 dataset, achieving a Dice coefficient of 86.70% and an IoU score of 78.50%. Moreover, Bisla et al [19] introduced Deep Convolutional Generative Adversarial Network (DCGAN) and ResNet-50 models to jointly segment skin lesions and classify them as benign or malignant. They applied pre-processing steps to suppress artifacts in the skin images.…”
Section: GAN-Based Methods
confidence: 99%
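The DCGAN in the statement above is used to synthesize minority-class lesion images and rebalance the training set. A full GAN is beyond a sketch, but the class-balancing step it serves can be illustrated with a lightweight stand-in that oversamples the minority class via flips and rotations (the function name and augmentation choices here are hypothetical, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_minority(images, labels, minority_label=1):
    """Oversample the minority class with random 90-degree rotations
    and horizontal flips until both classes have equal counts. A simple
    stand-in for GAN-based synthesis; the goal (class balance) is the same."""
    minority = [im for im, lb in zip(images, labels) if lb == minority_label]
    deficit = (len(images) - len(minority)) - len(minority)
    new_imgs, new_lbls = list(images), list(labels)
    for _ in range(max(0, deficit)):
        im = minority[rng.integers(len(minority))]
        im = np.rot90(im, k=int(rng.integers(4)))  # random rotation
        if rng.random() < 0.5:
            im = np.fliplr(im)                      # random flip
        new_imgs.append(im)
        new_lbls.append(minority_label)
    return new_imgs, new_lbls
```

Unlike geometric augmentation, a trained DCGAN generates genuinely new lesion appearances, which is why the cited works prefer it for heavily imbalanced dermoscopy datasets.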
“…Transfer learning was applied to AlexNet to identify skin lesions, in addition to fine-tuning and data augmentation. An automated system for melanoma detection was developed by Bisla et al [40] to counter the limitations of existing datasets. The proposed method relies heavily on a processing unit that eliminates image occlusions and a data-generation unit for skin lesion classification. A classification method proposed by Aldwgeri et al [48] uses CNN and transfer learning to enhance skin classification.…”
Section: A) Pre-Trained Models
confidence: 99%