With the evolution of deep learning technologies, computer vision tasks have achieved tremendous success in the biomedical domain. Supervised deep learning requires large labeled datasets, which are challenging to obtain, and this scarcity of data makes it difficult to build and improve automated disease diagnosis models. To synthesize data and improve the accuracy of a disease diagnosis model, we propose a novel approach that uses deep convolutional generative adversarial networks to generate images for the three stages of Alzheimer's disease: normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD). The proposed model performs well in synthesizing brain positron emission tomography images for all three stages. Model performance was measured with a classification model, which achieved an accuracy of 72% on the synthetic images. We also evaluated quantitative measures, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), achieving average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and an average SSIM score of 25.
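The PSNR and SSIM metrics cited in the abstract above have standard textbook definitions; as an illustration, here is a minimal sketch of both (the function names are our own, and `ssim_global` computes the formula over the whole image, whereas library implementations such as `skimage.metrics.structural_similarity` average it over a sliding window):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio (in dB) between two same-shaped images.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    # Single-window (global) SSIM; standard implementations average this
    # same formula over many local windows.
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For identical images PSNR is infinite and SSIM is exactly 1; both decrease as the synthetic image diverges from the reference.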
Automated brain segmentation is an active research domain, owing to the association of various neurological disorders with different regions of the brain, and it helps medical professionals in prognosis and diagnosis. Traditional techniques such as atlas-based and pattern-recognition-based methods led to the development of various tools for automated brain segmentation. Recently, deep learning techniques have begun to outperform classical state-of-the-art methods and are gradually becoming more mature. Consequently, deep learning has been extensively employed for precise segmentation of brain regions because of its capability to learn intricate features of high-dimensional data. In this work, a network for the segmentation of multiple brain regions is proposed that is based on 3D convolutional neural networks and utilizes residual learning and dilated convolution operations to efficiently learn the end-to-end mapping from MRI volumes to voxel-level brain segments. This research focuses on the segmentation of up to nine brain regions, including cerebrospinal fluid, white matter, and gray matter, as well as their sub-regions. Mean dice scores of 0.879 and 0.914 have been achieved for three and nine brain regions, respectively, using data from three different sources. Comparative analysis shows that our network gives better dice scores for most brain regions than state-of-the-art work. Moreover, the mean dice score of 0.903, obtained for eight-region segmentation on the MRBrains18 dataset, improves on the 0.876 achieved in previous work.
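The dice scores reported in this abstract follow the standard definition: twice the overlap between predicted and ground-truth segments, divided by their total size. A minimal sketch over labeled voxel arrays (function names are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, target, label):
    # Dice coefficient for one label: 2*|P ∩ T| / (|P| + |T|).
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # label absent from both volumes: count as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

def mean_dice(pred, target, labels):
    # Mean dice across regions, as in the 0.879 / 0.914 figures above.
    return float(np.mean([dice_score(pred, target, l) for l in labels]))
```

A score of 1.0 means the predicted region exactly matches the ground truth, and 0.0 means no overlap at all.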
The coronavirus disease 2019 (COVID-19) pandemic has spread around the globe very rapidly. Previously, the evolution pattern of, and similarity between, the COVID-19 causative organism, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the causative organisms of other similar infections have been determined using a single type of genetic marker in different studies. Herein, SARS-CoV-2 and the related β-coronaviruses Middle East respiratory syndrome coronavirus (MERS-CoV), SARS-CoV, and bat coronavirus (BAT-CoV) were comprehensively analyzed using a
Acral melanoma (AM) is a rare and lethal type of skin cancer. It can be diagnosed by expert dermatologists using dermoscopic imaging. Diagnosing melanoma is challenging for dermatologists because of the very minor differences between melanoma and non-melanoma cancers. Most research on skin cancer diagnosis addresses the binary classification of lesions into melanoma and non-melanoma. However, to date, limited research has been conducted on the classification of melanoma subtypes. The current study investigated the effectiveness of dermoscopy and deep learning in classifying melanoma subtypes such as AM. In this study, we present a novel deep learning model developed to classify skin cancer. We utilized a dermoscopic image dataset from the Yonsei University Health System, South Korea, for the classification of skin lesions. Various image processing and data augmentation techniques were applied to develop a robust automated system for AM detection. Our custom-built model is a seven-layer deep convolutional network trained from scratch. Additionally, transfer learning was utilized to compare the performance of our model, where AlexNet and ResNet-18 were modified, fine-tuned, and trained on the same dataset. Our proposed model achieved improved results, with accuracies of more than 90% for both AM and benign nevus. Additionally, using the transfer learning approach, we achieved an average accuracy of nearly 97%, which is comparable to that of state-of-the-art methods. From our analysis and results, we found that our model performed well and was able to effectively classify skin cancer. Our results show that the proposed system can be used by dermatologists in the clinical decision-making process for the early diagnosis of AM.