The COVID-19 pandemic caused by the novel coronavirus SARS-CoV-2 has changed the world as we know it. Early diagnosis is crucial in order to prevent new outbreaks and control its rapid spread. Medical imaging techniques, such as X-ray or chest computed tomography, are commonly used for this purpose due to their reliability for COVID-19 diagnosis. Computer-aided diagnosis systems could play an essential role in aiding radiologists in the screening process. In this work, a novel Deep Learning-based system, called COVID-XNet, is presented for COVID-19 diagnosis in chest X-ray images. The proposed system applies a set of preprocessing algorithms to the input images for variability reduction and contrast enhancement, which are then fed to a custom Convolutional Neural Network in order to extract relevant features and perform the classification between COVID-19 and normal cases. The system is trained and validated using a 5-fold cross-validation scheme, achieving an average accuracy of 94.43% and an AUC of 0.988. The output of the system can be visualized using Class Activation Maps, highlighting the main findings for COVID-19 in X-ray images. These promising results indicate that COVID-XNet could be used as a tool to aid radiologists and contribute to the fight against COVID-19.
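The abstract mentions Class Activation Maps as the visualization mechanism. As an illustration only, the hedged sketch below computes a classic CAM for a Keras CNN whose head is a global-average-pooling layer followed by a dense classifier; the layer name `last_conv`, the head layout, and the class index are assumptions, not details taken from COVID-XNet.

```python
# Hedged sketch of classic Class Activation Maps (CAM) for a Keras CNN
# ending in GlobalAveragePooling2D + Dense; layer names are assumptions.
import numpy as np
import tensorflow as tf

def class_activation_map(model, image, conv_layer_name="last_conv",
                         class_index=0):
    """Return a CAM heatmap (h x w) for one preprocessed image."""
    # Sub-model that exposes the last convolutional feature maps.
    conv_model = tf.keras.Model(model.input,
                                model.get_layer(conv_layer_name).output)
    feature_maps = conv_model(image[None, ...])[0]       # (h, w, channels)

    # Classic CAM: weight each feature map by the dense-layer weight that
    # connects its pooled activation to the target class. Assumes the last
    # layer of `model` is the final Dense classifier.
    dense_weights = model.layers[-1].get_weights()[0]    # (channels, classes)
    cam = np.tensordot(feature_maps.numpy(),
                       dense_weights[:, class_index], axes=([2], [0]))

    cam = np.maximum(cam, 0)          # keep only positive class evidence
    cam /= cam.max() + 1e-8           # normalize to [0, 1] for overlaying
    return cam
```

The resulting map can be upsampled to the X-ray's resolution and overlaid as a heatmap to highlight the regions driving the prediction.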
Robotics is an area of research in which the paradigm of Multi-Agent Systems (MAS) can prove highly useful. Multi-Agent Systems come in the form of cooperative robots in a team, sensor networks based on mobile robots, and robots in Intelligent Environments, to name but a few. However, the development of Multi-Agent Robotic Systems (MARS) still presents major challenges. Over the past decade, a large number of Robotics Software Frameworks (RSFs) have appeared, proposing solutions to the most recurrent problems in robotics. Some of these frameworks, such as ROS, YARP, OROCOS, ORCA, Open-RTM, and Open-RDK, possess certain characteristics and provide the basic infrastructure necessary for the development of MARS. The contribution of this work is the identification of such characteristics, as well as an analysis of these frameworks in comparison with general-purpose Multi-Agent System Frameworks (MASFs), such as JADE and Mobile-C.
Prostate cancer is currently one of the most commonly diagnosed types of cancer among males. Although its death rate has dropped over the last decades, it is still a major concern and one of the leading causes of cancer death. Prostate biopsy is a test that confirms or excludes the presence of cancer in the tissue. Samples extracted from biopsies are processed and digitized, obtaining gigapixel-resolution images called whole-slide images, which are analyzed by pathologists. Automated intelligent systems could be useful for helping pathologists in this analysis, reducing fatigue and making the routine process faster. In this work, a novel Deep Learning-based computer-aided diagnosis system is presented. This system is able to analyze whole-slide histology images, which are first patch-sampled and preprocessed using different filters, including a novel patch-scoring algorithm that removes worthless areas from the tissue. Then, patches are used as input to a custom Convolutional Neural Network, which produces a report showing malignant regions on a heatmap. The impact of applying a stain-normalization process to the patches is also analyzed in order to reduce color variability between different scanners. After training the network with a 3-fold cross-validation method, 99.98% accuracy, a 99.98% F1 score and 0.999 AUC are achieved on a separate test set. The computation time needed to obtain the heatmap of a whole-slide image is, on average, around 15 s. Our custom network outperforms other state-of-the-art works in terms of computational complexity for a binary classification task between normal and malignant prostate whole-slide images at patch level.
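The abstract describes a patch-scoring step that discards worthless tissue areas before classification. The paper's actual scoring rule is not given here, so the sketch below is only a generic stand-in: it scores each RGB patch by its fraction of non-background pixels and keeps patches above a threshold. The function names, `background_level`, and `min_score` are all assumed values for illustration.

```python
# Hedged sketch of a patch-scoring filter for whole-slide images: keep a
# patch only if enough of it is tissue rather than bright background.
# This is a generic stand-in, not the paper's actual algorithm.
import numpy as np

def patch_score(patch_rgb, background_level=220):
    """Fraction of pixels darker than an (assumed) whitish background."""
    gray = patch_rgb.mean(axis=2)                  # naive grayscale
    return float((gray < background_level).mean())

def filter_patches(patches, min_score=0.4):
    """Discard patches whose tissue fraction falls below min_score."""
    return [p for p in patches if patch_score(p) >= min_score]

# Usage: 256x256 RGB patches sampled from a digitized slide.
patches = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)]
kept = filter_patches(patches)
```

Filtering of this kind is what keeps the per-slide computation time low: only tissue-bearing patches reach the network.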
Glaucoma is a degenerative disease that affects vision, causing damage to the optic nerve that ultimately ends in vision loss. The classic techniques to detect it have undergone a great change since the introduction of machine learning techniques into the processing of eye fundus images. Several works focus on training a convolutional neural network (CNN) by brute force, while others use segmentation and feature extraction techniques to detect glaucoma. In this work, a diagnostic aid tool to detect glaucoma using eye fundus images is developed, trained and tested. It consists of two subsystems that are independently trained and tested, and whose results are combined to improve glaucoma detection. The first subsystem applies machine learning and segmentation techniques to detect the optic disc and cup independently, combine them and extract their physical and positional features. The second one applies transfer learning techniques to a pre-trained CNN to detect glaucoma through the analysis of complete eye fundus images. The results of both subsystems are combined to discriminate positive cases of glaucoma and improve the final detection. The results show that this system achieves a higher classification rate than previous works. The system also provides information on the rationale behind the proposed diagnosis, which can help the ophthalmologist to accept or modify it.
INDEX TERMS: Glaucoma, ensemble networks, medical diagnostic aids, medical imaging, explainable AI.
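As a rough illustration of the second subsystem and the final fusion step, the hedged sketch below builds a transfer-learning classifier from an ImageNet-pretrained backbone and combines its score with the feature-based subsystem's score. VGG16, the frozen-base head, and the weighted-average fusion rule are all assumptions; the abstract names neither the architecture nor the combination method.

```python
# Minimal transfer-learning sketch, assuming an ImageNet-pretrained
# backbone (VGG16 is an illustrative choice, not the paper's architecture).
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # glaucoma probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

def combine(p_features, p_cnn, w=0.5):
    """Assumed fusion rule: weighted average of the two subsystem scores."""
    return w * p_features + (1 - w) * p_cnn
```

A weighted average is only one plausible ensemble rule; a meta-classifier over both scores would serve the same purpose.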
Medical images from different clinics are acquired with different instruments and settings. To perform segmentation on these images as a cloud-based service, we need to train with multiple datasets so that the segmentation becomes independent of the image source. We also require an efficient and fast segmentation network. In this work, these two problems, which are essential for many practical medical imaging applications, are studied. As the segmentation network, U-Net has been selected. U-Net is a class of deep neural networks that has been shown to be effective for medical image segmentation, and many different U-Net implementations have been proposed. With the recent development of tensor processing units (TPUs), the execution times of these algorithms can be drastically reduced, which makes them attractive for cloud services. In this paper, we study, using Google's publicly available Colab environment, a generalized, fully configurable Keras U-Net implementation that uses Google TPU processors for training and prediction. As our application problem, we use the segmentation of the optic disc and cup, which can be applied to glaucoma detection. To obtain networks that perform well independently of the image acquisition source, we combine multiple publicly available datasets (RIM-One V3, DRISHTI and DRIONS). As a result of this study, we have developed a set of functions that allow the implementation of generalized U-Nets adapted to TPU execution and suitable for cloud-based service implementation.
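As a minimal sketch of the approach described above, the following Keras code builds a configurable U-Net and compiles it under `tf.distribute.TPUStrategy`, as available in a Colab TPU runtime. The depth, filter counts, and input size are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Hedged sketch: a configurable encoder/decoder U-Net builder trained under
# TPUStrategy. Requires a TPU runtime (e.g., Colab with a TPU accelerator).

def conv_block(x, filters):
    for _ in range(2):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same",
                                   activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 3), depth=3, base_filters=16):
    inputs = tf.keras.Input(input_shape)
    skips, x = [], inputs
    for d in range(depth):                        # encoder path
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = conv_block(x, base_filters * 2 ** depth)  # bottleneck
    for d in reversed(range(depth)):              # decoder with skip links
        x = tf.keras.layers.Conv2DTranspose(base_filters * 2 ** d, 2,
                                            strides=2, padding="same")(x)
        x = tf.keras.layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

# Standard Colab TPU initialization, then build/compile inside the scope so
# that the model's variables are replicated across TPU cores.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
with tf.distribute.TPUStrategy(resolver).scope():
    unet = build_unet()
    unet.compile(optimizer="adam", loss="binary_crossentropy")
```

Making `depth` and `base_filters` arguments is what "fully configurable" amounts to here: the same builder yields networks of different capacity without code changes.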