Background: COVID-19 has spread very rapidly, and it is important to build systems that can detect it in order to help an overwhelmed health care system. Many research studies on chest diseases rely on the strengths of deep learning techniques. Although some of these studies used state-of-the-art techniques and delivered promising results, such techniques are of limited use if they can detect only one type of disease without detecting the others. Objective: The main objective of this study was to achieve a faster and more accurate diagnosis of COVID-19. This study proposes a diagnostic technique that distinguishes COVID-19 x-ray images from normal x-ray images and from images specific to 14 other chest diseases. Methods: We propose a novel, multilevel, deep learning pipeline that detects COVID-19 along with other chest diseases from x-ray images. The pipeline reduces the burden on a single network of classifying a large number of classes. The deep learning models used in this study were pretrained on the ImageNet dataset, and transfer learning was used for fast training. The lungs and heart were segmented from the whole x-ray images and passed to the first classifier, which determines whether the x-ray is normal, COVID-19 affected, or characteristic of another chest disease. If the image is neither a COVID-19 x-ray nor a normal one, the second classifier classifies it as one of the 14 other diseases. Results: Our model uses state-of-the-art deep neural networks to classify COVID-19, 14 other chest diseases, and normal cases from x-ray images, with accuracy competitive with currently used state-of-the-art models. Because of the limited data in some classes, such as COVID-19, we applied 10-fold cross-validation with the ResNet50 model. Our classification technique achieved an average training accuracy of 96.04% and test accuracy of 92.52% for the first level of classification (ie, 3 classes). For the second level of classification (ie, 14 classes), it achieved a maximum training accuracy of 88.52% and test accuracy of 66.634% using ResNet50. We also found that when all 16 classes were classified at once, the overall accuracy for COVID-19 detection decreased; for ResNet50 it was 88.92% on training data and 71.905% on test data. Conclusions: Our proposed pipeline can detect COVID-19 with higher accuracy while also detecting 14 other chest diseases from x-ray images. This is achieved by dividing the classification task into multiple steps rather than classifying all classes collectively.
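As a rough illustration of the two-level routing described above, the sketch below builds two ImageNet-pretrained ResNet50 classifiers and sends an image to the 14-disease classifier only when the first-level prediction is neither normal nor COVID-19. It is a minimal sketch assuming TensorFlow/Keras; the image size, class names, and frozen-backbone setup are illustrative assumptions, not the authors' exact configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

LEVEL1_CLASSES = ["normal", "covid19", "other_chest_disease"]   # first-level classifier
LEVEL2_CLASSES = [f"disease_{i}" for i in range(14)]             # second-level classifier (hypothetical labels)

def build_classifier(num_classes, input_shape=(224, 224, 3)):
    """ImageNet-pretrained ResNet50 backbone with a new classification head."""
    base = ResNet50(weights="imagenet", include_top=False,
                    input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze the backbone for fast transfer learning
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

level1 = build_classifier(len(LEVEL1_CLASSES))
level2 = build_classifier(len(LEVEL2_CLASSES))

def classify_xray(segmented_lung_image):
    """Route one preprocessed image: level 1 first, level 2 only if needed."""
    x = np.expand_dims(segmented_lung_image, axis=0)
    coarse = LEVEL1_CLASSES[int(np.argmax(level1.predict(x, verbose=0)))]
    if coarse in ("normal", "covid19"):
        return coarse
    return LEVEL2_CLASSES[int(np.argmax(level2.predict(x, verbose=0)))]

The design point is that each network sees far fewer classes than a single 16-way classifier, which is what the abstract credits for the improved COVID-19 accuracy.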
BACKGROUND: Chest X-ray images are widely used to detect many different lung diseases. However, accurately detecting and classifying different lung diseases from chest X-ray images is often difficult for doctors, with large inter-reader variability. Thus, there is a strong demand for computer-aided automated schemes that help doctors detect lung diseases depicted on chest X-ray images more accurately and efficiently. OBJECTIVE: To develop convolutional neural network (CNN) based deep learning models and compare their feasibility and performance in classifying 14 chest diseases or pathology patterns from chest X-rays. METHOD: Several CNN models pretrained on the ImageNet dataset are modified as transfer learning models and applied to classify between 14 different chest pathology patterns and normal chest patterns depicted on chest X-ray images. In this process, a deep convolutional generative adversarial network (DC-GAN) is also trained to mitigate the effects of a small or imbalanced dataset by generating synthetic images to balance the dataset across diseases. The classification models are trained and tested using a large dataset of 91,324 frontal-view chest X-ray images. RESULTS: In this study, eight models are trained and compared. Among them, the ResNet-152 model achieves accuracies of 67% and 62% with and without data augmentation, respectively. Inception-V3, NasNetLarge, Xception, ResNet-50, and InceptionResNetV2 achieve accuracies of 68%, 62%, 66%, 66%, and 54%, respectively. Additionally, ResNet-152 with data augmentation achieves an accuracy of 83%, but only for six classes. CONCLUSION: This study addresses the problem of limited data by using GAN-based techniques to add synthetic images and demonstrates the feasibility of applying transfer learning CNN methods to help classify 14 types of chest diseases depicted on chest X-ray images.
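The augmentation step above relies on a DC-GAN to synthesize images for under-represented classes. Below is a compact, hedged sketch of a DC-GAN-style generator/discriminator pair in Keras; the 64x64 grayscale resolution, layer widths, and latent dimension are illustrative assumptions rather than the configuration used in the study.

import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # size of the random noise vector (assumed)

def build_generator():
    """Map a latent vector to a 64x64 grayscale synthetic X-ray."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, input_shape=(LATENT_DIM,)),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    """Score whether a 64x64 image is real or generated."""
    return tf.keras.Sequential([
        layers.Conv2D(64, 4, strides=2, padding="same", activation=tf.nn.leaky_relu,
                      input_shape=(64, 64, 1)),
        layers.Conv2D(128, 4, strides=2, padding="same", activation=tf.nn.leaky_relu),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

generator, discriminator = build_generator(), build_discriminator()
noise = tf.random.normal([16, LATENT_DIM])
synthetic_batch = generator(noise)       # synthetic images used to rebalance a minority class
scores = discriminator(synthetic_batch)  # discriminator output during adversarial training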
Diabetic retinopathy is an eye disease that affects the retina in patients with diabetes mellitus as a result of high blood sugar levels, and it may eventually lead to macular edema. The objective of this study is to design and compare several deep learning models that detect the severity of diabetic retinopathy, determine the risk of progression to macular edema, and segment different types of disease patterns using retina images. The Indian Diabetic Retinopathy Image Dataset (IDRiD) was used for disease grading and segmentation. Since images in the dataset vary in brightness and contrast, we employed three techniques for generating processed images from the originals: brightness, color, and contrast (BCC) enhancement; color jitter (CJ); and contrast limited adaptive histogram equalization (CLAHE). After image preprocessing, we applied pretrained ResNet50, VGG16, and VGG19 models to these differently preprocessed images both to determine the severity of the retinopathy and to estimate the likelihood of macular edema. UNet was also applied to segment different types of disease patterns. To train and test these models, the image dataset was divided into training, testing, and validation sets at 70%, 20%, and 10% ratios, respectively. During model training, data augmentation was also applied to increase the number of training images. Study results show that for detecting the severity of retinopathy and macular edema, ResNet50 achieved the best accuracy using BCC and original images, with accuracies of 60.2% and 82.5%, respectively, on the validation dataset. In segmenting different types of disease patterns, UNet yielded the highest testing accuracies of 65.22% and 91.09% for microaneurysms and hard exudates using BCC images, 84.83% for the optic disc using CJ images, and 59.35% and 89.69% for hemorrhages and soft exudates using CLAHE images, respectively. Thus, image preprocessing can play an important role in improving the efficacy and performance of deep learning models.
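For readers unfamiliar with the three preprocessing variants mentioned above (BCC enhancement, color jitter, and CLAHE), the sketch below shows one plausible OpenCV implementation; the gain and bias values, jitter range, CLAHE parameters, and the file name retina.jpg are illustrative assumptions, not the study's exact settings.

import cv2
import numpy as np

def bcc_enhance(img, alpha=1.3, beta=20):
    """Brightness/color/contrast enhancement via a simple linear transform (assumed parameters)."""
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)

def color_jitter(img, max_shift=0.1, rng=np.random.default_rng(0)):
    """Randomly scale each BGR channel to imitate color-jitter augmentation."""
    scales = 1.0 + rng.uniform(-max_shift, max_shift, size=3)
    return np.clip(img.astype(np.float32) * scales, 0, 255).astype(np.uint8)

def clahe_enhance(img, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast limited adaptive histogram equalization applied to the L channel."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

image = cv2.imread("retina.jpg")  # hypothetical IDRiD fundus image path
variants = {
    "bcc": bcc_enhance(image),
    "cj": color_jitter(image),
    "clahe": clahe_enhance(image),
}

Each preprocessed variant would then be fed to the pretrained ResNet50/VGG classifiers or the UNet segmenter, which is how the study compares the effect of preprocessing on model performance.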
Global environment monitoring is a task that needs attention due to the changing climate. This includes monitoring the rate of deforestation and the areas affected by flooding. Satellite imaging has greatly aided effective monitoring of the earth, and deep learning techniques have helped automate this monitoring process. This paper proposes a solution for observing the area covered by forest and water. To achieve this task, UNet, an image segmentation model, is proposed. Our model achieved validation accuracies of 82.55% and 82.92% for segmentation of areas covered by forest and water, respectively.
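As an illustration of the UNet approach described above, the sketch below defines a small Keras encoder-decoder with skip connections for binary segmentation (forest vs. background, or water vs. background); the depth, filter counts, and 128x128 tile size are assumptions for illustration, not the paper's configuration.

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions, the basic UNet building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(128, 128, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    # Encoder: downsample while doubling the filter count.
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)     # bottleneck
    # Decoder: upsample and concatenate the matching encoder feature maps (skip connections).
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u1, c1]), 32)
    # One-channel sigmoid output: per-pixel probability of forest (or water) cover.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c5)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

unet = build_unet()
unet.summary()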