The global pandemic of coronavirus disease 2019 (COVID-19) has resulted in an increased demand for testing, diagnosis, and treatment. Reverse transcription polymerase chain reaction (RT-PCR) is the definitive test for the diagnosis of COVID-19; however, chest X-ray radiography (CXR) is a fast, effective, and affordable test that can identify possible COVID-19-related pneumonia. This study investigates the feasibility of using a deep learning-based decision-tree classifier for detecting COVID-19 from CXR images. The proposed classifier comprises three binary decision trees, each trained as a deep convolutional neural network in the PyTorch framework. The first decision tree classifies the CXR images as normal or abnormal. The second tree identifies the abnormal images that contain signs of tuberculosis, whereas the third does the same for COVID-19. The accuracies of the first and second decision trees are 98% and 80%, respectively, whereas the average accuracy of the third decision tree is 95%. The proposed deep learning-based decision-tree classifier may be used to pre-screen patients, enabling triage and fast-track decision making before RT-PCR results become available.
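A minimal PyTorch sketch of such a three-stage cascade of binary CNN classifiers is shown below. The per-node architecture, decision thresholds, and triage ordering are assumptions for illustration only; the abstract specifies just that each tree node is a binary CNN classifier.

```python
import torch
import torch.nn as nn

class BinaryCXRNet(nn.Module):
    """Hypothetical binary CNN for one decision-tree node; the paper's
    exact architecture is not specified in the abstract."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

@torch.no_grad()
def triage(cxr, normal_net, tb_net, covid_net, thresh=0.5):
    """Run the three binary trees in sequence:
    normal/abnormal -> tuberculosis -> COVID-19."""
    if normal_net(cxr).item() < thresh:
        return "normal"
    if tb_net(cxr).item() >= thresh:
        return "tuberculosis"
    if covid_net(cxr).item() >= thresh:
        return "covid-19"
    return "other abnormality"
```

Structuring the classifier as a cascade means an image reaches the COVID-19 node only after being flagged abnormal and cleared of tuberculosis, which matches the triage workflow the abstract describes.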
Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of ¹⁸F-labeled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. Nevertheless, this traditional approach is subject to MR-CT registration defects, increases treatment costs, and increases the patient’s radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but efficiently, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) the approach (termed sU-cGAN) uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence that is used for radiological diagnosis, i.e., T2-weighted, Turbo Spin Echo Single Shot (TSE-SSH) MR images; 3) despite limited training data and a single-input-channel approach, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the suggested framework should be studied further in clinical settings. Moreover, the sU-Net model is worth exploring in other computer vision tasks.
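For concreteness, a minimal sketch of a depth-2 U-Net generator along the lines of the described sU-Net is given below. The channel widths, normalization layers, and output activation are assumptions, since the abstract specifies only the encoder/decoder depth of 2 and the single MR input channel.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Two 3x3 convolutions with batch norm and ReLU, as in typical U-Nets.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class ShallowUNet(nn.Module):
    """Sketch of a shallow U-Net generator with encoder/decoder depth 2:
    single-channel MR image in, single-channel synthetic CT out."""
    def __init__(self, base=64):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # skip connection 1
        e2 = self.enc2(self.pool(e1))           # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.tanh(self.out(d1))         # sCT intensities in [-1, 1]
```

With only two resolution levels, the gradient path from output to input is short, which is one plausible reading of how the shallow design sidesteps vanishing gradients while the skip connections preserve extracted features.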
Light detection and ranging (LiDAR) sensors are becoming available on modern mobile devices and provide a 3D sensing capability. This new capability is beneficial for perception in various use cases, but it is challenging for resource-constrained mobile devices to run the perception tasks in real time because of their high computational complexity. In this context, edge computing can be used to enable online LiDAR perception, but offloading the perception tasks to an edge server requires low-latency, lightweight, and efficient compression due to the large volume of LiDAR point cloud data. This paper presents FLiCR, a fast and lightweight LiDAR point cloud compression method for enabling edge-assisted online perception. FLiCR is based on range images (RIs) as an intermediate representation (IR) and on dictionary coding for compressing the RIs. FLiCR achieves its benefits by leveraging lossy RIs, and we show that the efficiency of bytestream compression is largely improved with quantization and subsampling. In addition, we identify the limitations of current quality metrics in capturing the entropy of a point cloud and introduce a new metric that reflects both point-wise and entropy-wise quality for lossy IRs. The evaluation results show that FLiCR is better suited for edge-assisted real-time perception than existing LiDAR compression methods, and we demonstrate the effectiveness of our compression method and metric through evaluations on 3D object detection and LiDAR SLAM.
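A rough sketch of the RI-plus-quantization-plus-dictionary-coding idea is given below. It uses zlib's DEFLATE (an LZ77-family dictionary coder) as a stand-in for FLiCR's actual coder, and the sensor resolution, field of view, and bit depth are illustrative values rather than the paper's parameters.

```python
import zlib
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Spherically project an (N, 3) LiDAR point cloud onto an h x w range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    fu, fd = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip(((fu - pitch) / (fu - fd) * h).astype(int), 0, h - 1)
    ri = np.zeros((h, w), dtype=np.float32)
    ri[v, u] = r  # one range per cell; colliding points are dropped (lossy)
    return ri

def compress_range_image(ri, max_range=80.0, bits=8):
    """Quantize ranges to `bits` bits, then dictionary-code the byte stream."""
    levels = 2 ** bits - 1
    q = np.clip(ri / max_range * levels, 0, levels).astype(np.uint8)
    return zlib.compress(q.tobytes(), level=1)  # low compression level for low latency
```

In this representation, subsampling amounts simply to shrinking h and w, which reduces both the size of the byte stream handed to the dictionary coder and the coder's own work, consistent with the latency gains the abstract reports.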