Melanoma is the deadliest form of skin cancer, yet distinguishing melanoma lesions from non-melanoma lesions remains a challenging task. Many computer-aided diagnosis and detection systems have been developed for this task, but their performance has been limited by the complex visual characteristics of skin lesion images, which consist of inhomogeneous features and fuzzy boundaries. In this paper, we propose a deep learning-based method that overcomes these limitations for automatic melanoma lesion detection and segmentation. We design an enhanced encoder-decoder network whose encoder and decoder sub-networks are connected through a series of skip pathways that bring the semantic level of the encoder feature maps closer to that of the decoder feature maps, enabling efficient learning and feature extraction. The system employs a multi-stage, multi-scale approach and uses a softmax classifier for pixel-wise classification of melanoma lesions. We also devise a new method, called Lesion-classifier, that classifies skin lesions into melanoma and non-melanoma based on the results of the pixel-wise classification. Our experiments on two well-established public benchmark skin lesion datasets, the International Symposium on Biomedical Imaging (ISBI) 2017 challenge dataset and the Hospital Pedro Hispano (PH2) dataset, demonstrate that our method is more effective than several state-of-the-art methods. We achieved an accuracy of 95% and a Dice coefficient of 92% on the ISIC 2017 dataset, and an accuracy of 95% and a Dice coefficient of 93% on the PH2 dataset.
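As an illustration of the kind of architecture this abstract describes, the following is a minimal PyTorch sketch, not the authors' exact network: all layer sizes are assumptions. It shows an encoder-decoder whose skip pathways re-process encoder features before merging them with decoder features, ending in pixel-wise softmax classification; an image-level melanoma/non-melanoma decision could then be derived from the predicted lesion pixels.

```python
# Minimal sketch (illustrative assumptions throughout) of an encoder-decoder
# segmentation network with skip pathways and pixel-wise softmax output.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class EncoderDecoder(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        # Skip pathway: extra convolutions re-process encoder features so their
        # semantic level is closer to the decoder features they are merged with.
        self.skip2, self.skip1 = conv_block(64, 64), conv_block(32, 32)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), self.skip2(e2)], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), self.skip1(e1)], dim=1))
        return torch.softmax(self.head(d1), dim=1)  # pixel-wise class probabilities

model = EncoderDecoder()
probs = model(torch.randn(1, 3, 128, 128))  # -> (1, 2, 128, 128)
```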
Skin lesion detection and classification are critical in diagnosing skin malignancy. Existing deep learning-based computer-aided diagnosis (CAD) methods still perform poorly on challenging skin lesions with complex features such as fuzzy boundaries, the presence of artifacts, low contrast with the background, and limited training data. They also rely heavily on suitable tuning of millions of parameters, which often leads to over-fitting, poor generalization, and heavy consumption of computing resources. This study proposes a new framework that performs both segmentation and classification of skin lesions for automated detection of skin cancer. The proposed framework consists of two stages. The first stage leverages an encoder-decoder Fully Convolutional Network (FCN) to learn the complex and inhomogeneous skin lesion features, with the encoder learning the coarse appearance and the decoder learning the lesion border details. Our FCN is designed with its sub-networks connected through a series of skip pathways that incorporate both long skip and shortcut connections, unlike the long skip connections alone used in traditional FCNs, enabling a residual learning strategy and effective training. The network also integrates a Conditional Random Field (CRF) module, which employs a linear combination of Gaussian kernels for its pairwise edge potentials, for contour refinement and lesion boundary localization. The second stage proposes a novel FCN-based DenseNet framework composed of dense blocks that are merged and connected via a concatenation strategy and transition layers. The system also employs hyperparameter optimization techniques to reduce network complexity and improve computing efficiency. This approach encourages feature reuse, requires a small number of parameters, and is effective with limited data. The proposed model was evaluated on the publicly available HAM10000 dataset of over 10,000 images covering seven disease categories, achieving 98% accuracy, 98.5% recall, and a 99% AUC score.
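As a sketch of the dense-block idea the second stage builds on, the following illustrative PyTorch fragment shows how each layer's output is concatenated with all preceding feature maps and then compressed by a transition layer, which is what encourages feature reuse with a small parameter budget. The growth rate, layer counts, and channel widths here are assumptions, not the paper's configuration.

```python
# Illustrative DenseNet-style dense block and transition layer; hyperparameters
# are assumptions, not the paper's exact settings.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth_rate, 3, padding=1),
        )
    def forward(self, x):
        # Concatenation strategy: new features are appended to all earlier ones,
        # so later layers can reuse them instead of relearning them.
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth_rate=12, num_layers=4):
        super().__init__()
        self.block = nn.Sequential(*[
            DenseLayer(in_ch + i * growth_rate, growth_rate)
            for i in range(num_layers)
        ])
    def forward(self, x):
        return self.block(x)

def transition(in_ch, out_ch):
    # Transition layer: 1x1 conv compresses channels, pooling downsamples.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 1), nn.AvgPool2d(2))

x = torch.randn(1, 24, 64, 64)
y = DenseBlock(24)(x)      # -> (1, 24 + 4*12, 64, 64) = (1, 72, 64, 64)
z = transition(72, 36)(y)  # -> (1, 36, 32, 32)
```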
Localization of the region of interest (ROI) is paramount in the analysis of medical images, assisting in the identification and detection of diseases. In this research, we explore the application of a deep learning approach to the analysis of medical images. Traditional methods have been restricted by the coarse and granulated appearance of most of these images. Recently, deep learning techniques have produced promising results in the segmentation of medical images for disease diagnosis. This research applies a robust deep learning architecture based on the Fully Convolutional Network (FCN)-UNet method to the segmentation of three types of medical images: skin lesion images, retinal images, and brain Magnetic Resonance Imaging (MRI) images. The proposed method can efficiently identify the ROI in these images to assist in the diagnosis of diseases such as skin cancer, eye defects and diabetes, and brain tumors. The system was evaluated on publicly available databases, including the International Symposium on Biomedical Imaging (ISBI) skin lesion images, retina images, and brain tumor datasets, achieving over 90% accuracy and Dice coefficient.
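For reference, the two metrics quoted across these segmentation abstracts can be computed as follows. This is a minimal NumPy sketch of pixel accuracy and the Dice coefficient between a predicted binary ROI mask and its ground truth; the toy arrays are illustrative.

```python
# Minimal sketch of the segmentation metrics: pixel accuracy and Dice coefficient.
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == target).mean())

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
print(pixel_accuracy(pred, gt))    # 5/6 ≈ 0.833
print(dice_coefficient(pred, gt))  # 2*2/(3+2) = 0.8
```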
Purpose: Breast cancer remains a serious public health problem that results in the loss of lives among women. However, early detection of its signs increases treatment options and the likelihood of cure. Although mammography is an established technique for examining signs of cancer in mammograms, manual observation by radiologists is demanding and often prone to diagnostic errors. Computer-aided diagnosis (CADx) systems could therefore be a viable alternative that facilitates and eases the cancer diagnosis process; hence this study. Methodology: The inputs to the proposed model are raw mammograms downloaded from the Mammographic Image Analysis Society database. Prior to classification, the raw mammograms were preprocessed. The gray-level co-occurrence matrix was then used to extract fifteen textural features from the mammograms at four angular directions, θ = {0°, 45°, 90°, 135°}, and two distances, D = {1, 2}. Afterwards, a two-stage support vector machine was used to classify the mammograms as normal, benign, or malignant. Results: All 37 normal images used as test data were classified as normal (no false positives), and all 41 abnormal images were correctly classified as abnormal (no false negatives), meaning that the sensitivity and specificity of the model in detecting abnormality are both 100%. After detecting an abnormality, the system further classified it as either "benign" or "malignant". Out of 23 benign images, 21 were correctly classified as benign; out of 18 malignant images, 17 were correctly classified as malignant. From these findings, the sensitivity, specificity, positive predictive value, and negative predictive value of the system are 94.4%, 91.3%, 89.5%, and 95.5%, respectively. Conclusion: This article further affirms automated CADx systems as a viable tool that could facilitate breast cancer diagnosis by radiologists.
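A minimal sketch of the two core steps described here, using scikit-image and scikit-learn: GLCM texture features at the stated four angles and two distances, and a two-stage SVM that first separates normal from abnormal and then benign from malignant. The property subset and SVM settings below are illustrative assumptions, not the paper's exact fifteen features.

```python
# Sketch of GLCM texture features plus a two-stage SVM classifier; the feature
# list and kernel choices are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # θ = 0°, 45°, 90°, 135°
DISTANCES = [1, 2]                                  # D = 1, 2
PROPS = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]

def glcm_features(image_u8):
    """Texture features from an 8-bit grayscale mammogram (or ROI patch)."""
    glcm = graycomatrix(image_u8, distances=DISTANCES, angles=ANGLES,
                        levels=256, symmetric=True, normed=True)
    # Each property yields one value per (distance, angle) pair.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

class TwoStageSVM:
    def __init__(self):
        self.stage1 = SVC(kernel="rbf")  # normal (0) vs. abnormal (1)
        self.stage2 = SVC(kernel="rbf")  # benign (1) vs. malignant (2)

    def fit(self, X, y):  # y: 0 = normal, 1 = benign, 2 = malignant
        self.stage1.fit(X, (y > 0).astype(int))
        abnormal = y > 0
        self.stage2.fit(X[abnormal], y[abnormal])
        return self

    def predict(self, X):
        out = self.stage1.predict(X)  # 0 or 1
        abnormal = out == 1
        if abnormal.any():            # stage 2 refines abnormal cases only
            out[abnormal] = self.stage2.predict(X[abnormal])
        return out
```

In use, each row of X would be the `glcm_features` vector of one mammogram, mirroring the cascade in the abstract: abnormality detection first, then benign/malignant discrimination.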