Background Differentiating benign spinal schwannoma from its malignant counterpart by neuroimaging alone is not always straightforward and remains confounding in many cases, owing to the atypical imaging presentations encountered in the clinic and the lack of specific diagnostic markers. Purpose To construct and validate a novel deep learning model based on multi-source magnetic resonance imaging (MRI) for automatically differentiating malignant from benign spinal schwannoma. Material and Methods We retrospectively reviewed MRI data from 119 patients with an initial diagnosis of benign or malignant spinal schwannoma confirmed by postoperative pathology. A novel convolutional neural network (CNN)-based deep learning model, named GAIN-CP (Guided Attention Inference Network with Clinical Priors), was constructed. An ablation study with fivefold cross-validation and cross-source experiments was conducted to validate the model. Diagnostic performance was compared among our GAIN-CP model, a conventional radiomics model, and radiologist-based clinical assessment using the area under the receiver operating characteristic curve (AUC) and balanced accuracy (BAC). Results The AUC of the proposed GAIN method was 0.83, outperforming the radiomics method (0.65) and the radiologists' evaluations (0.67). By incorporating both the image data and the clinical prior features, our GAIN-CP achieved an AUC of 0.95. GAIN-CP also achieved the best performance in the fivefold cross-validation and cross-source experiments. Conclusion The novel GAIN-CP method can successfully classify spinal schwannoma as malignant or benign from multi-source MR images, showing good promise for clinical diagnosis.
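For context, a minimal sketch of the two comparison metrics (AUC and BAC) as they might be computed with scikit-learn; the labels, scores, and 0.5 decision threshold below are placeholders for illustration, not the study's data or the authors' code.

```python
# Hedged sketch: compute AUC and balanced accuracy for a binary classifier.
# Labels and scores are toy placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

y_true = np.array([0, 0, 1, 1, 0, 1])                # 0 = benign, 1 = malignant
y_score = np.array([0.2, 0.4, 0.9, 0.7, 0.1, 0.6])   # predicted probabilities

auc = roc_auc_score(y_true, y_score)                  # threshold-free ranking metric
y_pred = (y_score >= 0.5).astype(int)                 # assumed 0.5 operating point
bac = balanced_accuracy_score(y_true, y_pred)         # mean of sensitivity/specificity
print(f"AUC={auc:.2f}, BAC={bac:.2f}")
```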
Background: The peritumoral microenvironment plays an important role in the occurrence, growth, and metastasis of cancer. The aim of this study was to explore the value of a CT image-based deep learning model of tumors and peritumoral regions in predicting the invasiveness of ground-glass nodules (GGNs). Methods: Preoperative thin-section chest CT images were reviewed retrospectively for 622 patients with a total of 687 pulmonary GGNs. GGNs were classified according to clinical management strategies as invasive lesions (invasive adenocarcinoma, IAC) or non-invasive lesions (atypical adenomatous hyperplasia [AAH], adenocarcinoma in situ [AIS], and minimally invasive adenocarcinoma [MIA]). The two volumes of interest (VOIs) identified on CT were the gross tumor volume (GTV) and the gross volume of tumor incorporating the peritumoral region (GPTV). A three-dimensional (3D) DenseNet was used to model and predict GGN invasiveness, with fivefold cross-validation; GTV and GPTV were used as inputs for the comparison models. Prediction performance was evaluated by sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results: The GTV-based model successfully predicted GGN invasiveness, with an AUC of 0.921 (95% CI, 0.896-0.937). Using GPTV, the AUC of the model increased to 0.955 (95% CI, 0.939-0.971). Conclusions: The deep learning method performed well in predicting GGN invasiveness, and the GPTV-based model was more effective than the GTV-based model.
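A minimal sketch of how a peritumoral VOI such as GPTV could be derived from a GTV mask by isotropic dilation; the 5 mm margin, the voxel spacing, and the helper name `gptv_from_gtv` are illustrative assumptions, not the study's actual preprocessing.

```python
# Hedged sketch: grow a binary 3D tumor mask (GTV) into a peritumoral
# VOI (GPTV) by morphological dilation. Margin and spacing are assumed.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def gptv_from_gtv(gtv_mask: np.ndarray, margin_mm: float, spacing_mm: tuple) -> np.ndarray:
    """Dilate a binary 3D tumor mask by roughly margin_mm."""
    struct = generate_binary_structure(rank=3, connectivity=1)
    # Approximate the margin in voxels using the smallest spacing;
    # a coarse but common simplification for anisotropic volumes.
    iters = max(1, int(round(margin_mm / min(spacing_mm))))
    return binary_dilation(gtv_mask, structure=struct, iterations=iters)

gtv = np.zeros((64, 64, 64), dtype=bool)
gtv[28:36, 28:36, 28:36] = True                    # toy tumor mask
gptv = gptv_from_gtv(gtv, margin_mm=5.0, spacing_mm=(1.0, 0.7, 0.7))
print(gtv.sum(), gptv.sum())                       # GPTV strictly contains GTV
```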
Background A high false-positive rate remains a technical obstacle hindering the broad application of deep-learning-based diagnostic tools for rib fracture diagnosis in routine radiological practice. Purpose To examine the performance of two versions of a deep-learning-based software tool in aiding radiologists in diagnosing rib fractures on chest computed tomography (CT) images. Material and Methods In total, 123 patients (708 rib fractures) were included in this retrospective study. Two groups of radiologists with different experience levels retrospectively reviewed images for rib fractures in concurrent mode, aided by RibFrac-High Sensitivity (HS) and RibFrac-High Precision (HP). Their diagnostic performance was compared against the reference standard in terms of sensitivity and positive predictive value (PPV). Results On a per-patient basis, RibFrac-HS exhibited higher sensitivity than RibFrac-HP (mean difference=0.051, 95% CI=0.012–0.090; P=0.011), whereas the latter significantly outperformed the former in PPV (mean difference=0.273, 95% CI=0.238–0.308; P<0.0001). Compared with no software assistance, RibFrac-HP significantly improved the junior and senior groups' sensitivities by 0.058 (95% CI=0.033–0.083; P<0.0001) and 0.058 (95% CI=0.034–0.081; P<0.0001), respectively, and decreased the diagnosis time by 206 s (95% CI=191–220; P<0.0001) and 79 s (95% CI=67–92; P<0.0001), respectively. Conclusion The sensitivity and efficiency of radiologists in identifying rib fractures can be improved by using RibFrac-HS and/or RibFrac-HP. With an added module for false-positive suppression, RibFrac-HP maintains sensitivity while increasing PPV in fracture detection compared with RibFrac-HS.
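A minimal sketch of the two reported metrics, which make the HS/HP trade-off concrete: suppressing false positives raises PPV while leaving sensitivity untouched. The TP/FP/FN counts below are toy values, not readings from the study.

```python
# Hedged sketch: sensitivity and PPV from detection counts.
# Counts are toy placeholders, not study data.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of reference-standard fractures that were detected."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Fraction of reported fractures that were true fractures."""
    return tp / (tp + fp)

# Toy example: 12 true fractures; 10 detected; HS reports 6 false
# positives, HP suppresses half of them without losing detections.
print(f"HS: sensitivity={sensitivity(10, 2):.2f}, PPV={ppv(10, 6):.2f}")
print(f"HP: sensitivity={sensitivity(10, 2):.2f}, PPV={ppv(10, 3):.2f}")
```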
To develop a deep learning-based model for detecting rib fractures on chest X-rays and to evaluate its performance in a multicenter study. Chest digital radiography (DR) images from 18,631 subjects were used for training, testing, and validation of the deep learning fracture detection model. We first built a pretrained model using contrastive learning on the training set with SimCLR (a simple framework for contrastive learning of visual representations). SimCLR was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network to identify rib fractures on chest X-ray images. The detection performance of the network for four types of rib fractures was evaluated using the test sets: 127 images from Data-CZ and 109 images from Data-CH, annotated for the four fracture types. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, at an average of five false positives per scan. For the Data-CH test set, the sensitivities of the three pretraining methods were 0.403, 0.655, and 0.748. Across the four fracture types, the detection model performed best on displaced fractures, with sensitivities of 0.873 and 0.774 for the Data-CZ and Data-CH test sets, respectively, at five false positives per scan, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can significantly improve the performance of deep-learning-based rib fracture detection on X-ray images, reducing missed diagnoses and improving diagnostic efficacy.
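A minimal sketch of the NT-Xent loss that underlies SimCLR-style contrastive pretraining, assuming PyTorch; the batch size, embedding dimension, and temperature are illustrative, and this is not the authors' training code.

```python
# Hedged sketch: normalized-temperature cross-entropy (NT-Xent) loss,
# the objective used by SimCLR-style contrastive pretraining.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for sample i is its other augmented view: i <-> i + n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)        # toy embeddings
print(nt_xent_loss(z1, z2).item())
```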
Background Ultra-high-resolution computed tomography (UHRCT) has shown great potential for the detection of pulmonary diseases. However, UHRCT scanning generally increases scanning time and radiation exposure. Super-resolution (SR) is an increasingly active application in CT imaging that can avoid this higher radiation dose. Recent works have shown that convolutional neural networks, especially generative adversarial network (GAN)-based models, can generate high-resolution CT images from phantom images or simulated low-resolution data without extra dose. Studies using clinical CT, particularly lung images, are rare because of the difficulty of collecting paired datasets. Purpose To generate clinical UHRCT of the lung from low-resolution computed tomography (LRCT) using a GAN model. Methods In total, 43 clinical scans with paired LRCT and UHRCT were collected. Paired patches were selected using structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) thresholds. A relativistic GAN with gradient guidance was trained to learn the mapping from LRCT to UHRCT. The performance of the proposed method was evaluated using PSNR and SSIM. A reader study with a five-point Likert score (five for the worst and one for the best) was also conducted to assess the proposed method in terms of general quality, diagnostic confidence, sharpness, and denoising level. Results Our method achieved a PSNR of 32.60 ± 2.92 and an SSIM of 0.881 ± 0.057 on our clinical CT dataset, outperforming other state-of-the-art methods based on simulated scenarios. Moreover, the reader study showed that our method achieved good clinical performance in general quality (1.14 ± 0.36), diagnostic confidence (1.36 ± 0.49), sharpness (1.07 ± 0.27), and denoising level (1.29 ± 0.61) compared with other SR methods. Conclusion This study demonstrated the feasibility of generating UHRCT images from LRCT without longer scanning time or increased radiation dose.
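A minimal sketch of the SSIM/PSNR-thresholded patch pairing step, assuming the LRCT has already been resampled and registered to the UHRCT grid; the patch size, thresholds, and helper name are illustrative assumptions, not the study's values.

```python
# Hedged sketch: keep only LR/HR patch pairs whose alignment quality
# passes both SSIM and PSNR thresholds. All parameters are assumed.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def select_paired_patches(lr_img, hr_img, patch=64, ssim_thr=0.7, psnr_thr=25.0):
    """Scan same-shape LR/HR slices on a regular grid and filter patch pairs."""
    pairs = []
    h, w = hr_img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            lr_p = lr_img[y:y + patch, x:x + patch]
            hr_p = hr_img[y:y + patch, x:x + patch]
            drange = hr_p.max() - hr_p.min()
            if drange == 0:
                continue                                 # skip flat (e.g., air) regions
            ssim = structural_similarity(lr_p, hr_p, data_range=drange)
            psnr = peak_signal_noise_ratio(hr_p, lr_p, data_range=drange)
            if ssim >= ssim_thr and psnr >= psnr_thr:
                pairs.append((lr_p, hr_p))
    return pairs

lr = np.random.rand(256, 256).astype(np.float32)         # toy "LRCT" slice
hr = lr + 0.01 * np.random.rand(256, 256).astype(np.float32)
print(len(select_paired_patches(lr, hr)))
```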