Abbreviations: AUC = area under the receiver operating characteristic curve; CI = confidence interval; COVID-19 = coronavirus disease 2019; COVNet = COVID-19 detection neural network; CAP = community-acquired pneumonia; DICOM = Digital Imaging and Communications in Medicine

Key Results: A deep learning method was able to identify COVID-19 on chest CT exams (area under the receiver operating characteristic curve, 0.96). The same method was able to identify community-acquired pneumonia on chest CT exams (area under the receiver operating characteristic curve, 0.95). There is overlap between the chest CT imaging findings of all viral pneumonias and those of other chest diseases, which encourages a multidisciplinary approach to the final diagnosis used for patient treatment.

Summary Statement: Deep learning detects coronavirus disease 2019 (COVID-19) and distinguishes it from community-acquired pneumonia and other non-pneumonic lung diseases using chest CT.

Abstract: Background: Coronavirus disease 2019 (COVID-19) has spread widely around the world since the beginning of 2020. Automatic and accurate detection of COVID-19 using chest CT is therefore desirable. Purpose: To develop a fully automatic framework to detect COVID-19 using chest CT and to evaluate its performance. Materials and Methods: In this retrospective, multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT exams for the detection of COVID-19. Community-acquired pneumonia (CAP) and other non-pneumonia CT exams were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed by the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
Results: The collected dataset consisted of 4356 chest CT exams from 3322 patients. The average age was 49 ± 15 years, and there were slightly more male than female patients (1838 vs 1484; p-value = 0.29). The per-exam sensitivity and specificity for detecting COVID-19 in the independent test set were 114 of 127 (90% [95% CI: 83%, 94%]) and 294 of 307 (96% [95% CI: 93%, 98%]), respectively, with an AUC of 0.96 (p-value < 0.001). The per-exam sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175) and 92% (239 of 259), respectively, with an AUC of 0.95 (95% CI: 0.93, 0.97). Conclusions: A deep learning model can accurately detect COVID-19 and differentiate it from community-acquired pneumonia and other lung diseases.
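The per-exam metrics above are reported as raw counts with 95% confidence intervals. The abstract does not state which interval method the authors used, so the sketch below computes sensitivity and specificity from the reported counts and, as an assumption for illustration, attaches Wilson score intervals, which reproduce the reported ranges:

```python
from math import sqrt

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Per-exam sensitivity/specificity with Wilson score 95% CIs.

    The Wilson score interval is an assumed choice; the abstract reports
    only counts and the resulting intervals.
    """
    def wilson(k, n):
        p = k / n
        denom = 1 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return center - half, center + half

    sens = tp / (tp + fn)  # 114 of 127 COVID-19 exams detected
    spec = tn / (tn + fp)  # 294 of 307 non-COVID-19 exams correctly excluded
    return (sens, wilson(tp, tp + fn)), (spec, wilson(tn, tn + fp))

# COVID-19 detection counts from the independent test set:
(sens, sens_ci), (spec, spec_ci) = sens_spec_with_ci(tp=114, fn=13, tn=294, fp=13)
```

Running this yields a sensitivity of about 90% (CI roughly 83%–94%) and a specificity of about 96% (CI roughly 93%–98%), matching the reported values.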
The authors propose a nonrigid image registration approach to align two computed tomography (CT)-derived lung datasets acquired during breath-holds at two inspiratory levels when the image distortion between the two volumes is large. The goal is to derive a three-dimensional warping function that can be used in association with computational fluid dynamics studies. In contrast to the sum of squared intensity difference (SSD), a new similarity criterion, the sum of squared tissue volume difference (SSTVD), is introduced to take into account changes in reconstructed Hounsfield units (scaled attenuation coefficient, HU) with inflation. This new criterion aims to minimize the local tissue volume difference between matched regions within the lungs, thus preserving lung tissue mass if the tissue density is assumed to be relatively constant. The local tissue volume difference arises from two factors: change in the regional volume due to the deformation, and change in the fractional tissue content of a region due to inflation. The change in regional volume is calculated from the Jacobian determinant of the warping function, and the change in fractional tissue content is estimated from reconstructed HU based on quantitative CT measures. A composite of multilevel B-splines is adopted to deform images, and a sufficient condition is imposed to ensure a one-to-one mapping even for a registration pair with a large volume difference. Parameters of the transformation model are optimized by a limited-memory quasi-Newton minimization approach in a multiresolution framework. To evaluate the effectiveness of the new similarity measure, the authors performed registrations for six lung volume pairs. Over 100 annotated landmarks located at vessel bifurcations were generated using a semiautomatic system. The results show that the SSTVD method yields smaller average landmark errors than the SSD method across all six registration pairs.
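The two factors above can be sketched numerically. In a simplified discrete form, the tissue volume of a voxel is its geometric volume times a fractional tissue content estimated from HU, and SSTVD penalizes the difference between the fixed-image tissue volume and the Jacobian-scaled tissue volume of the warped moving image. The HU reference values for pure air (−1000) and pure tissue (55) are common quantitative-CT assumptions, not values taken from this abstract:

```python
import numpy as np

HU_AIR, HU_TISSUE = -1000.0, 55.0  # assumed quantitative-CT reference values

def tissue_fraction(hu):
    """Fractional tissue content of a voxel estimated from its HU value."""
    return np.clip((hu - HU_AIR) / (HU_TISSUE - HU_AIR), 0.0, 1.0)

def sstvd(fixed_hu, warped_moving_hu, jacobian, voxel_vol=1.0):
    """Sum of squared tissue volume difference between matched regions.

    fixed_hu:          HU values of the fixed image
    warped_moving_hu:  HU values of the moving image resampled through the
                       current transform
    jacobian:          per-voxel Jacobian determinant of the transform,
                       scaling moving-voxel volume into fixed-image space
    """
    v_fixed = voxel_vol * tissue_fraction(fixed_hu)
    v_moving = jacobian * voxel_vol * tissue_fraction(warped_moving_hu)
    return float(np.sum((v_fixed - v_moving) ** 2))

# A voxel that inflates (Jacobian > 1) becomes more air-filled (lower HU);
# the cost vanishes when the Jacobian exactly compensates the drop in
# fractional tissue content, i.e. tissue mass is preserved.
cost = sstvd(np.array([-800.0]), np.array([-900.0]), np.array([2.0]))
```

Here a fixed voxel at −800 HU matches a moving voxel at −900 HU with Jacobian 2, so the cost is zero, whereas an intensity-based SSD would penalize the 100 HU difference.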
Objectives: To evaluate the performance of a novel three-dimensional (3D) joint convolutional and recurrent neural network (CNN-RNN) for the detection of intracranial hemorrhage (ICH) and its five subtypes (cerebral parenchymal, intraventricular, subdural, epidural, and subarachnoid) on non-contrast head CT. Methods: A total of 2836 subjects (ICH/normal, 1836/1000) from three institutions were included in this ethically approved retrospective study, comprising 76,621 slices from non-contrast head CT scans. ICH and its five subtypes were annotated by three independent experienced radiologists, with majority voting as the reference standard at both the subject level and the slice level. Ninety percent of the data were used for training and validation, and the remaining 10% for final evaluation. A joint CNN-RNN classification framework was proposed, with the flexibility to train when either subject-level or slice-level labels are available. The predictions were compared with the interpretations of three junior radiology trainees and an additional senior radiologist. Results: Our algorithm took less than 30 s on average to process a 3D CT scan. For the two-type classification task (predicting bleeding or not), it achieved excellent values (≥ 0.98) across all reported metrics at the subject level. For the five-type classification task (predicting the five subtypes), it achieved an AUC above 0.8 for every subtype.
The performance of our algorithm was generally superior to the average performance of the junior radiology trainees for both the two-type and five-type classification tasks. Conclusions: The proposed method accurately and quickly detected ICH and its subtypes, suggesting its potential for assisting radiologists and physicians in their clinical diagnostic workflow. Key Points: • A 3D joint CNN-RNN deep learning framework was developed for ICH detection and subtype classification, with the flexibility to train with either subject-level or slice-level labels. • This deep learning framework is fast and accurate at detecting ICH and its subtypes. • The performance of the automated algorithm was superior to the average performance of three junior radiology trainees in this work, suggesting its potential to reduce initial misinterpretations. Electronic supplementary material: The online version of this article (10.1007/s00330-019-06163-2) contains supplementary material, which is available to authorized users.
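The majority-voting reference standard used for annotation can be sketched as follows. With three independent readers, a label present in at least two reads wins; the abstract does not say how a three-way disagreement was adjudicated, so the placeholder return value for that case is an assumption:

```python
from collections import Counter

def majority_vote(labels):
    """Reference-standard label from independent radiologist reads.

    Returns the label chosen by a strict majority of readers, or None
    when no majority exists (the tie-handling here is a placeholder;
    the abstract does not specify an adjudication rule).
    """
    value, count = Counter(labels).most_common(1)[0]
    return value if count > len(labels) / 2 else None

# Subject-level ICH subtype label from three reads:
label = majority_vote(["subdural", "subdural", "epidural"])
```

This is applied independently at the subject level and at the slice level, matching the two label granularities the framework can train on.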
We present a novel image-based technique to estimate a subject-specific boundary condition (BC) for computational fluid dynamics (CFD) simulation of pulmonary air flow. Regional ventilation information for an individual is derived by registering two computed tomography (CT) lung datasets and is then passed to the CT-resolved airways as the flow BC. The CFD simulations show that the proposed method predicts lobar volume changes consistent with directly image-measured metrics, whereas the two traditional BCs (uniform velocity or uniform pressure) yield lobar volume changes and regional pressure differences inconsistent with observed physiology.
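One plausible reading of this boundary condition is that each CT-resolved airway outlet feeding a lobe receives a fraction of the total inspiratory flow proportional to that lobe's registration-derived volume change. The sketch below implements that proportional split; the lobe names and volume numbers are illustrative assumptions, not values from the study:

```python
def ventilation_flow_split(lobe_volume_changes):
    """Fractional flow BC per lobar outlet, proportional to the
    registration-derived lobar volume change (mL) between the two
    breath-hold CT scans."""
    total = sum(lobe_volume_changes.values())
    return {lobe: dv / total for lobe, dv in lobe_volume_changes.items()}

# Illustrative lobar volume changes (mL) between the two inspiratory levels:
split = ventilation_flow_split({
    "RUL": 180.0, "RML": 90.0, "RLL": 300.0, "LUL": 210.0, "LLL": 270.0,
})
```

By construction the fractions sum to one, so the subject-specific BC distributes the prescribed total flow without creating or losing mass, unlike a uniform-velocity or uniform-pressure outlet condition.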
Rationale: Smoking-related microvascular loss causes end-organ damage in the kidneys, heart, and brain. Basic research suggests a similar process in the lungs, but no large studies have assessed pulmonary microvascular blood flow (PMBF) in early chronic lung disease. Objectives: To investigate whether PMBF is reduced in mild as well as more severe chronic obstructive pulmonary disease (COPD) and emphysema. Methods: PMBF was measured using gadolinium-enhanced magnetic resonance imaging (MRI) among smokers with COPD and control subjects aged 50 to 79 years without clinical cardiovascular disease. COPD severity was defined by standard criteria. Emphysema on computed tomography (CT) was defined by the percentage of lung regions below −950 Hounsfield units (−950 HU) and by radiologists using a standard protocol. We adjusted for potential confounders, including smoking, oxygenation, and left ventricular cardiac output. Measurements and Main Results: Among 144 participants, PMBF was reduced by 30% in mild COPD, by 29% in moderate COPD, and by 52% in severe COPD (all P < 0.01 vs. control subjects). PMBF was reduced with greater percentage emphysema −950 HU and radiologist-defined emphysema, particularly panlobular and centrilobular emphysema (all P < 0.01). Registration of MRI and CT images revealed that PMBF was reduced in mild COPD in both nonemphysematous and emphysematous lung regions. Associations for PMBF were independent of measures of small airways disease on CT and gas trapping, largely because emphysema and small airways disease occurred in different smokers. Conclusions: PMBF was reduced in mild COPD, including in regions of lung without frank emphysema, and may represent a distinct pathological process from small airways disease. PMBF may provide an imaging biomarker for therapeutic strategies targeting the pulmonary microvasculature.
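The quantitative CT emphysema measure used here is conventionally the percentage of lung voxels falling below a low-attenuation threshold of −950 HU. A minimal sketch, with toy HU values rather than real scan data:

```python
def percent_emphysema(lung_hu_values, threshold=-950.0):
    """Percentage of segmented lung voxels below the low-attenuation
    threshold (conventionally -950 HU) used to quantify emphysema."""
    below = sum(1 for hu in lung_hu_values if hu < threshold)
    return 100.0 * below / len(lung_hu_values)

# Toy example: 2 of 5 lung voxels fall below -950 HU.
pe = percent_emphysema([-980.0, -960.0, -900.0, -850.0, -700.0])
```

In practice this is computed over the full segmented lung volume, which is how the "percentage emphysema −950 HU" covariate in the analysis above is obtained.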