Background: Digital subtraction angiography (DSA) generates an image by subtracting a mask image from a dynamic angiogram. However, misregistration artifacts caused by patient movement can result in unclear DSA images that interrupt procedures. Purpose: To train and validate a deep learning (DL)-based model that produces DSA-like cerebral angiograms directly from dynamic angiograms, and then to evaluate these angiograms quantitatively and visually for clinical usefulness. Materials and Methods: A retrospective model development and validation study was conducted on dynamic and DSA image pairs consecutively collected from January 2019 through April 2019. Angiograms showing misregistration were first separated per patient by two radiologists and sorted into the misregistration test data set. Nonmisregistration angiograms were divided into development and external test data sets at a ratio of 8:1 per patient. The development data set was divided into training and validation data sets at a ratio of 3:1 per patient. The DL model was created by using the training data set, tuned with the validation data set, and then evaluated quantitatively with the external test data set and visually with the misregistration test data set. Quantitative evaluations used the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) with mixed linear models. Visual evaluation was conducted by using a numerical rating scale. Results: The training, validation, nonmisregistration test, and misregistration test data sets included 10 751, 2784, 1346, and 711 paired images collected from 40 patients (mean age, 62 years ± 11 [standard deviation]; 33 women). In the quantitative evaluation, DL-generated angiograms showed a mean PSNR value of 40.2 dB ± 4.05 and a mean SSIM value of 0.97 ± 0.02, indicating high agreement with the paired DSA images.
In the visual evaluation, the median ratings of the DL-generated angiograms were similar to or better than those of the original DSA images for all 24 sequences. Conclusion: The DL-based model provided clinically useful cerebral angiograms, free from clinically significant artifacts, directly from dynamic angiograms.
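The two quantitative metrics used above can be illustrated briefly. The sketch below (not the study's implementation) computes PSNR and a simplified, non-windowed variant of SSIM for a pair of grayscale images; the function names and the assumed 8-bit dynamic range (max value 255) are illustrative choices, and published SSIM is normally computed over local windows rather than globally.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified SSIM computed over the whole image (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an SSIM of 1.0 and an infinite PSNR; higher values of both metrics indicate closer agreement between a generated angiogram and its paired DSA reference.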
Background: We investigated the performance improvement of physicians with varying levels of chest radiology experience when using commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. Methods: Chest radiographs and their corresponding chest CT scans were retrospectively collected from one institution between July 2017 and June 2018. Two radiologists (study authors) annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers first interpreted the radiographs alone and then reinterpreted them while referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if they overlapped significantly with the ground truth, specifically, if the intersection over union (IoU) was 0.3 or higher. The sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of the readers' assessments were calculated. Results: In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 lung cancer nodules) and 253 normal images. The CAD provided a modest boost to the readers' sensitivity, particularly for general physicians. The performance of general physicians improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV, while the performance of radiologists improved from 0.51 to 0.60 for sensitivity, from 0.96 to 0.96 for specificity, from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV.
With the use of the CAD, the overall ratios of improvement in sensitivity, specificity, accuracy, PPV, and NPV were 1.22 (95% CI 1.14–1.30), 1.00 (1.00–1.01), 1.03 (1.02–1.04), 1.07 (1.03–1.11), and 1.02 (1.01–1.03), respectively. Conclusion: The AI-based CAD improved the ability of physicians to detect lung cancer nodules on chest radiographs. The CAD output can indicate regions that physicians may have overlooked during their initial assessment.
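The correctness criterion for a reader's bounding box (intersection over union of at least 0.3 with the ground-truth box) can be sketched as follows. The helper names and the (x1, y1, x2, y2) corner convention are illustrative assumptions, not taken from the study's code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_correct(pred_box, truth_box, threshold=0.3):
    """Judge a predicted box correct if its IoU with the ground truth meets the threshold."""
    return iou(pred_box, truth_box) >= threshold
```

For example, two 10 × 10 boxes offset by half their width share 50 of 150 combined units of area (IoU = 1/3), which clears the 0.3 threshold, whereas a small corner overlap does not.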
Aims: We aimed to develop models to detect aortic stenosis (AS) from chest radiographs, one of the most basic imaging tests, with artificial intelligence. Methods and Results: We used 10 433 retrospectively collected digital chest radiographs from 5638 patients to train, validate, and test three deep learning models. Chest radiographs were collected from patients who had also undergone echocardiography at a single institution between July 2016 and May 2019. These were labelled from the corresponding echocardiography assessments as AS-positive or AS-negative. The radiographs were separated on a patient basis into training (8327 images from 4512 patients, mean age 65 ± 15 [SD] years), validation (1041 images from 563 patients, mean age 65 ± 14 years), and test (1065 images from 563 patients, mean age 65 ± 14 years) datasets. The soft voting-based ensemble of the three developed models had the best overall performance for predicting AS, with an AUC, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 0.83 (95% CI 0.77–0.88), 0.78 (0.67–0.86), 0.71 (0.68–0.73), 0.71 (0.68–0.74), 0.18 (0.14–0.23), and 0.97 (0.96–0.98), respectively, in the validation dataset, and 0.83 (0.78–0.88), 0.83 (0.74–0.90), 0.69 (0.66–0.72), 0.71 (0.68–0.73), 0.23 (0.19–0.28), and 0.97 (0.96–0.98), respectively, in the test dataset. Conclusion: Deep learning models using chest radiographs have the potential to differentiate between radiographs of patients with and without AS. Lay summary: We created AI models using deep learning to identify aortic stenosis from chest radiographs. Three AI models were developed and evaluated with 10 433 retrospectively collected radiographs labelled from echocardiography reports. The ensemble AI model could detect aortic stenosis in a test dataset with an AUC of 0.83 (95% CI 0.78–0.88).
Since chest radiography is a cost-effective and widely available imaging test, our model can provide an additive resource for the detection of aortic stenosis.
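The soft voting-based ensemble described above averages the probability that each of the three models assigns to the AS-positive class, then thresholds the average. The sketch below is a minimal illustration under assumed names; the 0.5 decision threshold is an illustrative default, not the study's operating point.

```python
import numpy as np

def soft_vote(prob_matrix: np.ndarray) -> np.ndarray:
    """Average class probabilities across models.

    prob_matrix has shape (n_models, n_samples): each row holds one model's
    predicted probability of the positive class for every sample.
    """
    return prob_matrix.mean(axis=0)

def predict_positive(prob_matrix: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Label a sample positive when the averaged probability meets the threshold."""
    return soft_vote(prob_matrix) >= threshold
```

Averaging probabilities (soft voting) differs from majority voting on hard labels: a model that is highly confident can outweigh two weakly confident dissenters, which often smooths out individual models' errors.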