Abbreviations: AUC = area under the receiver operating characteristic curve; CI = confidence interval; COVID-19 = coronavirus disease 2019; COVNet = COVID-19 detection neural network; CAP = community acquired pneumonia; DICOM = digital imaging and communications in medicine

Key Results: A deep learning method was able to identify COVID-19 on chest CT exams (area under the receiver operating characteristic curve, 0.96). The same method was able to identify community acquired pneumonia on chest CT exams (area under the receiver operating characteristic curve, 0.95). There is overlap between the chest CT imaging findings of all viral pneumonias and those of other chest diseases, which encourages a multidisciplinary approach to the final diagnosis used for patient treatment.

Summary Statement: Deep learning detects coronavirus disease 2019 (COVID-19) and distinguishes it from community acquired pneumonia and other non-pneumonic lung diseases using chest CT.

Abstract:

Background: Coronavirus disease 2019 (COVID-19) has spread widely around the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT.

Purpose: To develop a fully automatic framework to detect COVID-19 using chest CT and to evaluate its performance.

Materials and Methods: In this retrospective, multi-center study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT exams for the detection of COVID-19. Community acquired pneumonia (CAP) and other non-pneumonia CT exams were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.

Results: The collected dataset consisted of 4356 chest CT exams from 3322 patients. The mean age was 49 ± 15 years, and there were slightly more male patients than female (1838 vs 1484; p = 0.29). The per-exam sensitivity and specificity for detecting COVID-19 in the independent test set were 114 of 127 (90% [95% CI: 83%, 94%]) and 294 of 307 (96% [95% CI: 93%, 98%]), respectively, with an AUC of 0.96 (p < 0.001). The per-exam sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175) and 92% (239 of 259), respectively, with an AUC of 0.95 (95% CI: 0.93, 0.97).

Conclusions: A deep learning model can accurately detect COVID-19 and differentiate it from community acquired pneumonia and other lung diseases.
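The Results above summarize per-exam sensitivity, specificity, and AUC. As a point of reference, the short sketch below shows one common way such per-exam metrics are computed with scikit-learn from model scores and ground-truth labels; the function name, threshold, and toy data are illustrative assumptions, not the authors' code or data.

```python
# Illustrative sketch (not the authors' code): computing per-exam sensitivity,
# specificity, and AUC for one target class (e.g., COVID-19 vs. everything else)
# from hypothetical per-exam scores and ground-truth labels.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def per_exam_metrics(y_true, y_score, threshold=0.5):
    """y_true: 1 = target class present, 0 = absent; y_score: model probability."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)          # e.g., 114 of 127 positives detected
    specificity = tn / (tn + fp)          # e.g., 294 of 307 negatives correctly excluded
    auc = roc_auc_score(y_true, y_score)  # threshold-independent summary
    return sensitivity, specificity, auc

# Toy example (labels and scores are made up for illustration only)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels * 0.7 + rng.normal(0.3, 0.2, size=200), 0, 1)
print(per_exam_metrics(labels, scores))
```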
An outbreak of a novel coronavirus disease (COVID-19) was first recorded in Wuhan, China, in late December 2019 and subsequently became a pandemic around the world. Although COVID-19 is an acutely treated disease, it can also be fatal, with a case-fatality rate of 4.03% in China and, as of 8 April 2020, the highest rates of 13.04% in Algeria and 12.67% in Italy. The onset of serious illness may result in death as a consequence of substantial alveolar damage and progressive respiratory failure. Although laboratory testing, e.g., reverse transcription polymerase chain reaction (RT-PCR), is the gold standard for clinical diagnosis, the tests may produce false negatives. Moreover, under pandemic conditions, a shortage of RT-PCR testing resources may also delay subsequent clinical decisions and treatment. Under such circumstances, chest CT imaging has become a valuable tool for both diagnosis and prognosis of COVID-19 patients. In this study, we propose a weakly supervised deep learning strategy for detecting and classifying COVID-19 infection from CT images. The proposed method minimises the requirement for manual labelling of CT images while still achieving accurate infection detection and distinguishing COVID-19 from non-COVID-19 cases. Based on the promising qualitative and quantitative results, we envisage wide deployment of the developed technique in large-scale clinical studies.
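The key idea in the passage above is weak supervision: training from one label per CT scan rather than slice- or lesion-level annotation. The sketch below illustrates one plausible setup of that kind in PyTorch, a small 3D CNN producing a scan-level prediction trained with a scan-level binary label; the architecture and names are assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch of a weakly supervised setup: only one label per CT scan
# (COVID-19 vs. non-COVID-19), no slice- or lesion-level annotation.
import torch
import torch.nn as nn

class ScanLevelClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Small 3D CNN backbone over the whole volume (depth, height, width).
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # collapse to one feature vector per scan
        )
        self.classifier = nn.Linear(32, 1)    # single logit: probability of COVID-19

    def forward(self, volume):                # volume: (batch, 1, D, H, W)
        x = self.features(volume).flatten(1)
        return self.classifier(x).squeeze(1)  # scan-level logit

model = ScanLevelClassifier()
loss_fn = nn.BCEWithLogitsLoss()              # supervised only by the scan-level label
volume = torch.randn(2, 1, 32, 128, 128)      # toy batch of two CT volumes
labels = torch.tensor([1.0, 0.0])             # 1 = COVID-19, 0 = non-COVID-19
loss = loss_fn(model(volume), labels)
loss.backward()
```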
Expression microarrays are widely used for investigating candidate molecular targets in human cancer. However, genome-wide expression signatures screened by gene set enrichment analysis (GSEA) have not been reported for Chinese gastric cancer (GC). To identify new molecular targets for GC, GSEA was performed. In the present study, GSEA was used to identify differentially expressed gene sets in our database. Total RNA from paired tissue samples (n = 48) and a tissue microarray containing 132 paired tissues were used to further validate the expression level of INHBA and its correlation with clinicopathological factors. Upregulated INHBA expression in gastric cancer was identified and further confirmed by qPCR and immunostaining analysis. Increased INHBA expression was significantly correlated with tumor diameter and depth of tumor invasion. Patients with higher expression levels of INHBA had shorter disease-free survival. GSEA was an effective approach for identifying new molecular targets for GC, and INHBA may be an indicator of poor survival in GC.
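The survival claim in the abstract above rests on comparing disease-free survival between expression groups. The sketch below shows how such a comparison is typically carried out with a Kaplan-Meier estimate and a log-rank test using the lifelines package; the follow-up times and event indicators are synthetic and only illustrate the analysis step, not the study's cohort or results.

```python
# Illustrative sketch (synthetic data, not the study's cohort): comparing
# disease-free survival between high- and low-INHBA expression groups with a
# Kaplan-Meier estimate and a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Hypothetical follow-up times (months) and recurrence indicators (1 = event).
t_high = rng.exponential(scale=20, size=60)   # high INHBA: shorter survival assumed
t_low = rng.exponential(scale=40, size=60)    # low INHBA
e_high = rng.integers(0, 2, size=60)
e_low = rng.integers(0, 2, size=60)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="INHBA high")
print(kmf.median_survival_time_)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {result.p_value:.3f}")
```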
Objectives: To evaluate the performance of a novel three-dimensional (3D) joint convolutional and recurrent neural network (CNN-RNN) for the detection of intracranial hemorrhage (ICH) and its five subtypes (cerebral parenchymal, intraventricular, subdural, epidural, and subarachnoid) in non-contrast head CT.

Methods: A total of 2836 subjects (ICH/normal, 1836/1000) from three institutions were included in this ethically approved retrospective study, with a total of 76,621 slices from non-contrast head CT scans. ICH and its five subtypes were annotated by three independent experienced radiologists, with majority voting as the reference standard at both the subject level and the slice level. Ninety percent of the data was used for training and validation, and the remaining 10% for final evaluation. A joint CNN-RNN classification framework was proposed, with the flexibility to train when either subject-level or slice-level labels are available. The predictions were compared with the interpretations of three junior radiology trainees and an additional senior radiologist.

Results: It took our algorithm less than 30 s on average to process a 3D CT scan. For the two-type classification task (predicting bleeding or not), our algorithm achieved excellent values (≥ 0.98) across all reported metrics at the subject level. For the five-type classification task (predicting the five subtypes), our algorithm achieved AUC > 0.8 for all subtypes. The performance of our algorithm was generally superior to the average performance of the junior radiology trainees for both the two-type and five-type classification tasks.

Conclusions: The proposed method was able to accurately detect ICH and its subtypes at high speed, suggesting its potential for assisting radiologists and physicians in their clinical diagnosis workflow.

Key Points:
• A 3D joint CNN-RNN deep learning framework was developed for ICH detection and subtype classification, with the flexibility to train with either subject-level labels or slice-level labels.
• This deep learning framework is fast and accurate at detecting ICH and its subtypes.
• The performance of the automated algorithm was superior to the average performance of three junior radiology trainees in this work, suggesting its potential to reduce initial misinterpretations.

Electronic supplementary material: The online version of this article (10.1007/s00330-019-06163-2) contains supplementary material, which is available to authorized users.
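The abstract describes a joint CNN-RNN in which a CNN encodes individual CT slices and a recurrent network aggregates the slice sequence into a subject-level prediction. The following is a minimal sketch of that pattern in PyTorch; layer sizes, the GRU choice, and the mean pooling are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of a joint CNN-RNN classifier of the kind described above:
# a 2D CNN encodes each CT slice, a recurrent network aggregates the slice
# sequence, and a linear head predicts subject-level labels (ICH + 5 subtypes).
import torch
import torch.nn as nn

class SliceCNNRNN(nn.Module):
    def __init__(self, num_classes=6, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-slice feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * feat_dim, num_classes)  # ICH + five subtypes

    def forward(self, scan):                            # scan: (batch, slices, 1, H, W)
        b, s = scan.shape[:2]
        feats = self.cnn(scan.flatten(0, 1)).view(b, s, -1)  # encode every slice
        seq, _ = self.rnn(feats)                        # context across neighboring slices
        return self.head(seq.mean(dim=1))               # subject-level multi-label logits

model = SliceCNNRNN()
logits = model(torch.randn(2, 24, 1, 128, 128))         # two toy scans of 24 slices
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(2, 6))
```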