Pneumonia screening is one of the most crucial steps in the pneumonia diagnosis workflow: it can improve radiologists' efficiency and prevent delayed treatment. In this paper, we propose a deep regression framework for automatic pneumonia screening that jointly learns from multi-channel images and multi-modal information (i.e., clinical chief complaints, age, and gender) to simulate the clinical pneumonia screening process. We demonstrate the advantages of the framework in three ways. First, multi-channel images (Lung Window Images, High Attenuation Images, and Low Attenuation Images) provide richer visual features than a single image channel and improve the screening of pneumonia accompanied by severe disease. Second, the proposed framework treats chest CT scans as short video frames and analyzes them with a Recurrent Convolutional Neural Network, which automatically extracts image features across multi-channel image slices. Third, chief complaints and demographic information provide valuable prior knowledge that enhances the image features and further improves performance. The proposed framework has been extensively validated on 900 clinical cases. Compared to the baseline, it improves accuracy by 2.3% and significantly improves sensitivity by 3.1%. To the best of our knowledge, we are the first to screen pneumonia using multi-channel images together with multi-modal demographic and clinical information on a large-scale raw clinical dataset.
INDEX TERMS Computed tomography, clinical diagnosis, biomedical imaging, pneumonia screening.
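To make the described pipeline concrete, the following PyTorch sketch shows one way a recurrent convolutional network could fuse per-slice CNN features from the three image channels with encoded chief-complaint and demographic inputs. All layer choices (ResNet-18 backbone, GRU aggregator, feature sizes, the clinical-feature encoding) are illustrative assumptions, not the authors' implementation.

# Minimal sketch of recurrent-convolutional, multi-modal fusion (assumptions noted above).
import torch
import torch.nn as nn
import torchvision.models as models

class PneumoniaScreeningNet(nn.Module):
    def __init__(self, clinical_dim=16, hidden_dim=256):
        super().__init__()
        # Per-slice CNN over the 3 channels (lung window / high / low attenuation).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep 512-d slice features
        self.cnn = backbone
        # Recurrent aggregation over the slice sequence ("short video" view of the scan).
        self.rnn = nn.GRU(512, hidden_dim, batch_first=True)
        # Encoder for chief-complaint / age / gender features (assumed pre-vectorized).
        self.clinical = nn.Sequential(nn.Linear(clinical_dim, 64), nn.ReLU())
        # Joint head producing a pneumonia score.
        self.head = nn.Linear(hidden_dim + 64, 1)

    def forward(self, slices, clinical):
        # slices: (B, T, 3, H, W)   clinical: (B, clinical_dim)
        b, t, c, h, w = slices.shape
        feats = self.cnn(slices.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last = self.rnn(feats)            # last hidden state summarizes the scan
        fused = torch.cat([last.squeeze(0), self.clinical(clinical)], dim=1)
        return self.head(fused)              # logit; apply sigmoid for a probability

# Example: a batch of 2 scans with 32 slices each plus 16-d clinical vectors.
model = PneumoniaScreeningNet()
score = model(torch.randn(2, 32, 3, 224, 224), torch.randn(2, 16))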
In this paper, we propose a novel feature decoupling method to tackle two critical problems in lung nodule segmentation: (i) the ambiguity of the nodule boundary leads to imprecise segmentation boundaries, and (ii) segmentation results suffer from a high false positive rate. Our motivation is that an accurate segmentation network needs to explicitly model the nodule boundary and texture information while suppressing noise information. To this end, a novel Deep Feature Decoupling Module (DFDM) is proposed to decouple the nodule boundary, noise, and texture information from the original feature maps. The decoupled boundary and texture information is used to benefit the segmentation, and the noise information is removed from the input features to reduce the false positive rate. The proposed DFDM consists of three parallel branches, the Boundary Sensitive Branch (BSB), the Noise Removal Branch (NRB), and the Texture Preserving Branch (TPB), which decouple these three types of information, respectively. In particular, we design the BSB with a novel architecture to effectively capture the boundary information of lung nodules. We apply the proposed DFDM to the U-Net architecture and achieve convincing segmentation results on the LIDC-IDRI dataset. Code and models are available at https://github.com/chinichenw/DFDM.
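As a rough illustration of the decoupling idea, the PyTorch sketch below splits an input feature map into boundary, noise, and texture branches, subtracts the estimated noise, and adds the boundary and texture cues back. The branch internals and the recombination rule are assumptions for illustration; the paper's boundary-sensitive branch uses a dedicated design not reproduced here.

# Hedged sketch of a three-branch feature-decoupling block (not the authors' DFDM code).
import torch
import torch.nn as nn

def conv_block(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                         nn.BatchNorm2d(ch), nn.ReLU(inplace=True))

class FeatureDecouplingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.boundary = conv_block(channels)   # BSB: boundary-sensitive branch (simplified)
        self.noise = conv_block(channels)      # NRB: noise-removal branch
        self.texture = conv_block(channels)    # TPB: texture-preserving branch

    def forward(self, x):
        b = self.boundary(x)
        n = self.noise(x)
        t = self.texture(x)
        # Suppress the estimated noise, keep boundary and texture cues.
        return x - n + b + t

# The block acts as a drop-in refinement for U-Net encoder/decoder features:
feat = torch.randn(1, 64, 64, 64)
refined = FeatureDecouplingBlock(64)(feat)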
Many medical imaging domains suffer from inherent ambiguities. A feasible approach to resolving the ambiguity of lung nodules in the segmentation task is to learn a distribution over segmentations given a 2D lung nodule image. However, a lung nodule is a 3D structure containing dense spatial information that is clearly helpful for resolving this ambiguity, yet this information has not been studied so far. To this end, we propose a probabilistic generative segmentation model consisting of a V-Net and a conditional variational autoencoder. The proposed model exploits the 3D spatial information of the lung nodule through the V-Net to learn a density model over segmentations. It can efficiently produce multiple plausible semantic segmentation hypotheses for a lung nodule, assisting radiologists in further diagnosis to resolve the present ambiguity. We evaluate our method on the publicly available LIDC-IDRI dataset and achieve a new state-of-the-art result of 0.231 ± 0.005 in D²_GED. This result demonstrates the effectiveness and importance of leveraging the 3D spatial information of lung nodules for such problems.
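The following sketch shows, under stated assumptions, how a volumetric encoder-decoder combined with a conditional-VAE-style latent prior can be sampled repeatedly to produce several plausible 3D segmentation hypotheses. The lightweight backbone stands in for a full V-Net, and the latent fusion scheme is illustrative rather than the authors' model.

# Minimal sketch of probabilistic 3D segmentation with a sampled latent code (assumptions above).
import torch
import torch.nn as nn

class Prob3DSegNet(nn.Module):
    def __init__(self, latent_dim=6, ch=16):
        super().__init__()
        # Lightweight volumetric backbone (placeholder for a full V-Net).
        self.backbone = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        # Prior network: predicts mean and log-variance of the latent code
        # from the input volume (conditional-VAE prior p(z | x)).
        self.prior = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                   nn.Linear(ch, 2 * latent_dim))
        # Fuse a sampled latent with backbone features and predict a mask.
        self.fuse = nn.Conv3d(ch + latent_dim, 1, 1)

    def forward(self, x, num_samples=4):
        feats = self.backbone(x)                      # (B, ch, D, H, W)
        mu, logvar = self.prior(feats).chunk(2, dim=1)
        hypotheses = []
        for _ in range(num_samples):
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            z_map = z[:, :, None, None, None].expand(-1, -1, *feats.shape[2:])
            hypotheses.append(torch.sigmoid(self.fuse(torch.cat([feats, z_map], 1))))
        return hypotheses                             # list of (B, 1, D, H, W) masks

# Drawing four plausible segmentations for one 64^3 nodule patch:
masks = Prob3DSegNet()(torch.randn(1, 1, 64, 64, 64))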