Skin cancer is one of the most threatening cancers: it spreads to other parts of the body if not caught and treated early. During the last few years, the integration of deep learning into skin cancer diagnosis has been a milestone in health care, and dermoscopic images are at the center of this revolution. This review focuses on state-of-the-art automatic diagnosis of skin cancer from dermoscopic images based on deep learning. It thoroughly explores existing deep learning methods and their application to the diagnosis of dermoscopic images, and aims to present and summarize the latest methodologies for melanoma classification and the techniques used to improve them. We discuss advancements in deep learning-based solutions for diagnosing skin cancer, along with challenges and future opportunities to strengthen these automatic systems so they can support dermatologists and enhance their ability to diagnose skin cancer.
INDEX TERMS: Skin cancer, Dermoscopy images, Deep learning, Classification, Literature review.

FIGURE 1: Global heat map showing estimated age-standardized incidence rates of melanoma of the skin in 2020, all sexes and all ages. The map covers all parts of the world except Greenland in the Arctic Circle; the regions most affected by skin melanoma are Europe, the United States, Canada, and Australia [2].

... microscopy-based tool to improve non-invasive diagnostic discrimination of skin lesions based on color and structure analysis [5]. This paper focuses on dermoscopy images. Because dermoscopic structures have direct histopathologic correlates, dermoscopic images help the dermatologist select management and treatment options for particular types of skin cancer [6]. In addition, dermoscopy can be useful for ...
Skin cancers are among the most commonly diagnosed cancers worldwide, with an estimated more than 1.5 million new cases in 2020. The use of computer-aided diagnosis (CAD) systems for early detection and classification of skin lesions helps reduce skin cancer mortality rates. Inspired by the success of the transformer network in natural language processing (NLP) and of deep convolutional neural networks (DCNNs) in computer vision, we propose an end-to-end CNN-transformer hybrid model with a focal loss (FL) function to classify skin lesion images. First, the CNN extracts low-level, local feature maps from the dermoscopic images. In the second stage, the vision transformer (ViT) models these features globally, extracts abstract, high-level semantic information, and sends it to a multi-layer perceptron (MLP) head for classification. Based on an evaluation of three different loss functions, FL-based training is adopted to mitigate the extreme class imbalance present in the International Skin Imaging Collaboration (ISIC) 2018 dataset. The experimental analysis demonstrates that the hybrid model combined with the FL strategy achieves impressive skin lesion classification performance and outperforms existing work.
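To make the described pipeline concrete, the sketch below shows one plausible way to wire a CNN stem, a ViT-style transformer encoder, an MLP head, and a multi-class focal loss together in PyTorch. It is not the authors' exact architecture: the backbone layers, embedding size, focal-loss hyperparameters (gamma, alpha), and the seven-class output (matching ISIC 2018) are illustrative assumptions, and positional embeddings are omitted for brevity.

```python
# Hedged sketch of a CNN-ViT hybrid classifier trained with a focal loss.
# All layer sizes and hyperparameters are assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalLoss(nn.Module):
    """Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t)."""
    def __init__(self, gamma: float = 2.0, alpha: float = 1.0):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        log_pt = F.log_softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-self.alpha * (1.0 - pt) ** self.gamma * log_pt).mean()


class CNNViTHybrid(nn.Module):
    """CNN stem extracts local feature maps; a transformer encoder models them
    globally; an MLP head on the class token produces class logits."""
    def __init__(self, num_classes: int = 7, embed_dim: int = 256,
                 depth: int = 4, num_heads: int = 8):
        super().__init__()
        # Small convolutional stem (stand-in for a pretrained CNN backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.BatchNorm2d(embed_dim), nn.ReLU(),
        )
        # Learnable class token and ViT-style encoder over the CNN feature tokens.
        # (A full ViT would also add positional embeddings; omitted here for brevity.)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=embed_dim * 4,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # MLP classification head.
        self.head = nn.Sequential(nn.LayerNorm(embed_dim),
                                  nn.Linear(embed_dim, embed_dim), nn.GELU(),
                                  nn.Linear(embed_dim, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                        # (B, C, H', W') local feature maps
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C) token sequence
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)
        tokens = self.encoder(tokens)              # global self-attention over tokens
        return self.head(tokens[:, 0])             # classify from the class token


if __name__ == "__main__":
    model = CNNViTHybrid()
    criterion = FocalLoss(gamma=2.0)
    images = torch.randn(4, 3, 224, 224)           # dummy dermoscopic image batch
    labels = torch.randint(0, 7, (4,))
    loss = criterion(model(images), labels)
    loss.backward()
    print(loss.item())
```

The focal loss down-weights well-classified (high-probability) examples via the (1 - p_t)^gamma factor, which is why it is a natural choice for the long-tailed class distribution of ISIC 2018; with gamma = 0 it reduces to standard cross-entropy.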