Purpose: Lunit INSIGHT CXR (Lunit) is a commercially available deep-learning algorithm (DLA)-based decision support system for chest radiography (CXR). This retrospective study aimed to evaluate the concordance rate between radiologists and Lunit for thoracic abnormalities in a multicenter health screening cohort.

Methods and materials: We retrospectively evaluated the radiology reports and Lunit results for CXR at several health screening centers in August 2020. Lunit was adopted as a clinical decision support system (CDSS) in routine clinical practice, and radiologists completed their reports after reviewing the Lunit results. The DLA result was provided as a color map with an abnormality score (%) for thoracic lesions when the score exceeded the predefined cutoff value of 15%. Concordance was achieved when (a) the radiology report was consistent with the DLA result ("accept") or (b) the radiology report was partially consistent with the DLA result ("edit") or included additional lesions beyond the DLA result ("add"). Discordance occurred when the DLA result was rejected in the radiology report. In addition, we compared reading times before and after Lunit was introduced. Finally, we administered a system usability scale questionnaire to radiologists and physicians who had used Lunit.

Results: Among 3,113 participants (1,157 men; mean age, 49 years), thoracic abnormalities were found in 343 (11.0%) based on the CXR radiology reports and 621 (20.1%) based on the Lunit results. The concordance rate was 86.8% (accept: 85.3%, edit: 0.9%, and add: 0.6%), and the discordance rate was 13.2%. Excluding 479 cases (7.5%) whose reading time data were unavailable (n = 5) or unreliable (n = 474), the median reading time increased after the clinical integration of Lunit (median, 19 s vs. 14 s; P < 0.001).
Conclusion: This real-world multicenter health screening cohort showed high concordance between the chest X-ray reports and the Lunit results under clinical integration of the deep-learning solution. The reading time slightly increased with Lunit assistance.
Background: The coronavirus disease 2019 (COVID-19) pandemic has threatened public health. Medical imaging tools such as chest X-ray and computed tomography (CT) play an essential role in the global fight against COVID-19. Recently emerging artificial intelligence (AI) technologies further strengthen the power of imaging tools and help medical professionals. We reviewed the current progress in the development of AI technologies for the diagnostic imaging of COVID-19.

Current Concepts: The rapid development of AI, including deep learning, has led to technologies that may assist in the diagnosis and treatment of diseases, prediction of disease risk and prognosis, health index monitoring, and drug development. In the era of the COVID-19 pandemic, AI can improve work efficiency through accurate delineation of infections on chest X-ray and CT images, differentiation of COVID-19 from other diseases, and facilitation of subsequent disease quantification. Moreover, computer-aided platforms help radiologists make clinical decisions for disease diagnosis, tracking, and prognosis.

Discussion and Conclusion: We reviewed the current progress in AI technology for chest imaging for COVID-19. However, it is necessary to combine clinical experts' observations, medical image data, and clinical and laboratory findings for reliable and efficient diagnosis and management of COVID-19. Future AI research should focus on multimodality-based models and on how to select the best model architecture for COVID-19 diagnosis and management.
Background: Labeling error may restrict radiography-based deep learning algorithms in screening lung cancer using chest radiography. Physicians also need precise location information for small nodules. We hypothesized that a deep learning approach using chest radiography data with pixel-level labels referencing computed tomography enhances nodule detection and localization compared to data with only image-level labels.

Methods: The National Institutes of Health (NIH) dataset, a chest radiograph-based labeling dataset, and the AI-HUB dataset, a computed tomography-based labeling dataset, were used. As a deep learning algorithm, we employed DenseNet with Squeeze-and-Excitation blocks. We constructed four models to examine whether labeling based on chest computed tomography versus chest X-ray, and pixel-level versus image-level labeling, improves the performance of deep learning in nodule detection. The models were evaluated and compared on two external datasets.

Results: In external validation, the model trained with AI-HUB data (area under the curve [AUC] 0.88 and 0.78) outperformed the model trained with NIH data (AUC 0.71 and 0.73). On the external datasets, the model trained with pixel-level AI-HUB data performed best (AUC 0.91 and 0.86). In terms of nodule localization, the model trained with AI-HUB data annotated at the pixel level demonstrated a Dice coefficient greater than 0.60 across all validation datasets, outperforming models trained with image-level annotation data, whose Dice coefficients ranged from 0.36 to 0.58.

Conclusion: Our findings imply that precisely labeled data are required for constructing robust and reliable deep learning nodule detection models on chest radiographs. In addition, it is anticipated that the deep learning model trained with pixel-level data will provide nodule location information.
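The Dice coefficient reported above measures overlap between a predicted nodule mask and the ground-truth mask. A minimal sketch of how it is computed is shown below; the function name and the toy 4x4 masks are illustrative assumptions, not taken from the study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = nodule pixel):
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: predicted mask overlaps 3 of the 3 ground-truth pixels,
# but also predicts 1 extra pixel.
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 2))  # 2*3 / (4+3) ≈ 0.86
```

A Dice value of 1.0 means perfect overlap and 0.0 means none, which is why the pixel-level models' values above 0.60 indicate substantially better localization than the 0.36 to 0.58 range of the image-level models.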