The skin is the largest organ of the human body and protects it from the external environment. Detecting skin disease at an early stage is challenging because many skin diseases look alike; even skilled dermatologists find it difficult to identify skin lesions owing to the low contrast between a lesion and the adjoining tissue. Therefore, there is a need for an automated system that can detect skin lesions promptly and precisely. Recently, Deep Learning (DL) has attained outstanding success in the diagnosis of various diseases. Thus, in this paper, a transfer learning-based model is proposed with the help of the pre-trained Xception model. The Xception model was modified by adding one pooling layer, two dense layers, and one dropout layer, and the original Fully Connected (FC) layer was replaced with a new FC layer covering the seven skin disease classes. The proposed model was evaluated on the HAM10000 dataset, which has a large class imbalance; data augmentation techniques were applied to mitigate this imbalance. The results show that the model attains an accuracy of 96.40% for classifying skin diseases. The proposed model performs best on Benign Keratosis, with a precision of 99%, a sensitivity of 97%, and an F1-score of 0.98. Such a method can give patients and doctors a good indication of whether or not medical assistance is required, avoiding undue stress and false alarms.
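As an illustration only (not the authors' code), the sketch below shows how such a modified Xception classifier could be assembled in Keras: the pre-trained backbone is frozen, a pooling layer, two dense layers, and a dropout layer are added, and a new seven-class FC output replaces the original top layer. The dense-layer sizes, dropout rate, and input resolution are assumptions, since the abstract does not specify them.

```python
# Minimal sketch of the described transfer-learning setup, assuming 299x299 RGB
# inputs and hypothetical dense-layer sizes / dropout rate.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_modified_xception(num_classes: int = 7) -> tf.keras.Model:
    # Pre-trained Xception backbone without its original FC (top) layer
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3)
    )
    base.trainable = False  # transfer learning: keep pre-trained weights frozen

    x = layers.GlobalAveragePooling2D()(base.output)   # one pooling layer
    x = layers.Dense(256, activation="relu")(x)        # first dense layer (size assumed)
    x = layers.Dropout(0.5)(x)                         # one dropout layer (rate assumed)
    x = layers.Dense(128, activation="relu")(x)        # second dense layer (size assumed)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # new FC layer: 7 classes

    model = models.Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_modified_xception()
model.summary()
```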
Dermoscopy images can be classified more accurately if the skin lesions or nodules are first segmented. Nodule segmentation is a difficult task because of fuzzy borders, irregular boundaries, and inter- and intra-class variation. Several algorithms have been developed for segmenting skin lesions from dermoscopic images, but their accuracy still lags well behind the industry standard. In this paper, a modified U-Net architecture is proposed in which the dimensions of the feature maps are modified for accurate and automatic segmentation of dermoscopic images. In addition, adding more kernels to the feature maps allows a more precise extraction of the nodule. The effectiveness of the proposed model was evaluated over several hyperparameters, such as the number of epochs, the batch size, and the choice of optimizer, with augmentation techniques applied to increase the number of images available in the PH2 dataset. The proposed model performs best with the Adam optimizer, a batch size of 8, and 75 epochs.
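For illustration, the sketch below builds a compact U-Net in Keras whose encoder and decoder blocks use widened filter counts (more kernels per feature map) and trains it with the reported best settings (Adam optimizer, batch size 8, 75 epochs). The exact filter counts, input size, and depth are assumptions, not the authors' architecture.

```python
# Compact U-Net sketch for binary lesion segmentation; filter counts, depth,
# and the 256x256 input size are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 convolutions; larger filter counts give richer feature maps
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)

    # Encoder with skip connections
    c1 = conv_block(inputs, 64);  p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 128);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 256);     p3 = layers.MaxPooling2D()(c3)

    # Bottleneck
    b = conv_block(p3, 512)

    # Decoder: upsample and concatenate the matching encoder feature maps
    u3 = layers.Conv2DTranspose(256, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), 256)
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), 64)

    # One-channel sigmoid output: per-pixel lesion probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_unet()
# Reported best hyperparameters: Adam optimizer, batch size 8, 75 epochs
# model.fit(train_images, train_masks, batch_size=8, epochs=75)
```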
Brain tumor diagnosis at an early stage can improve the chances of successful treatment and better patient outcomes. In the biomedical field, non-invasive diagnostic procedures, such as magnetic resonance imaging (MRI), can be used to diagnose brain tumors. Deep learning, a type of artificial intelligence, can analyze MRI images in a matter of seconds, reducing the time to diagnosis and potentially improving patient outcomes. Furthermore, an ensemble model can increase classification accuracy by combining the strengths of multiple models and compensating for their individual weaknesses. Therefore, in this research, a weighted average ensemble deep learning model is proposed for the classification of brain tumors. For the weighted ensemble classification model, three different feature spaces are taken from the transfer-learning VGG19 model, a Convolutional Neural Network (CNN) model without augmentation, and a CNN model with augmentation. These three feature spaces are ensembled with the best combination of weights, i.e., weight1, weight2, and weight3, found using grid search. The dataset used for simulation is the lower-grade glioma collection from The Cancer Genome Atlas (TCGA), comprising 3929 MRI images from 110 patients. The ensemble model helps reduce overfitting by combining multiple models that have learned different aspects of the data. The proposed ensemble model outperforms the three individual models for detecting brain tumors in terms of accuracy, precision, and F1-score. Therefore, the proposed model can act as a second-opinion tool for radiologists to diagnose tumors from brain MRI images.
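As a minimal sketch (not the authors' implementation), the snippet below illustrates the weighted-average ensembling step: a grid search over weight1, weight2, and weight3 applied to the class-probability outputs of the three models (VGG19 transfer learning, CNN without augmentation, CNN with augmentation). The prediction arrays, validation labels, and grid step are hypothetical inputs.

```python
# Grid search over three ensemble weights (summing to 1) applied to the
# softmax outputs of three models; arrays and step size are assumptions.
import itertools
import numpy as np

def grid_search_weights(preds, y_true, step=0.1):
    """preds: list of three (n_samples, n_classes) probability arrays."""
    best_weights, best_acc = None, -1.0
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for w1, w2 in itertools.product(grid, grid):
        w3 = 1.0 - w1 - w2
        if w3 < 0:  # the three weights must sum to 1
            continue
        # Weighted average of the three models' class probabilities
        ensemble = w1 * preds[0] + w2 * preds[1] + w3 * preds[2]
        acc = np.mean(np.argmax(ensemble, axis=1) == y_true)
        if acc > best_acc:
            best_acc, best_weights = acc, (w1, w2, w3)
    return best_weights, best_acc

# Usage with hypothetical prediction arrays and validation labels:
# weights, acc = grid_search_weights([vgg19_probs, cnn_probs, cnn_aug_probs], val_labels)
```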