Purpose
To demonstrate the value of pretraining with millions of radiologic images, compared with ImageNet photographic images, on downstream medical applications when using transfer learning.

Materials and Methods
This retrospective study included patients who underwent a radiologic study between 2005 and 2020 at an outpatient imaging facility. Key images and associated labels were retrospectively extracted from the original study interpretations. These images were used to train RadImageNet models from random weight initialization. The RadImageNet models were compared with ImageNet models using the area under the receiver operating characteristic curve (AUC) for eight classification tasks and Dice scores for two segmentation problems.

Results
The RadImageNet database consists of 1.35 million annotated medical images from 131 872 patients who underwent CT, MRI, and US for musculoskeletal, neurologic, oncologic, gastrointestinal, endocrine, abdominal, and pulmonary pathologic conditions. For transfer learning tasks on small datasets (thyroid nodules on US, breast masses on US, anterior cruciate ligament injuries on MRI, and meniscal tears on MRI), the RadImageNet models demonstrated a significant advantage (P < .001) over ImageNet models, with AUC improvements of 9.4%, 4.0%, 4.8%, and 4.5%, respectively. For larger datasets (pneumonia on chest radiography, COVID-19 on CT, SARS-CoV-2 on CT, and intracranial hemorrhage on CT), the RadImageNet models also showed improved AUC (P < .001), by 1.9%, 6.1%, 1.7%, and 0.9%, respectively. Additionally, lesion localization by the RadImageNet models improved by 64.6% and 16.4% on the thyroid and breast US datasets, respectively.

Conclusion
RadImageNet pretrained models demonstrated better interpretability compared with ImageNet models, especially for smaller radiologic datasets.

Keywords: CT, MR Imaging, US, Head/Neck, Thorax, Brain/Brain Stem, Evidence-based Medicine, Computer Applications–General (Informatics)

Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Cadrin-Chênevert in this issue.
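To make the transfer-learning comparison concrete, the following is a minimal sketch, assuming a binary downstream task (e.g., thyroid nodule US) and Keras. The RadImageNet checkpoint filename and the toy arrays are placeholders for illustration, not the authors' released code; the same backbone is initialized either from ImageNet weights or from a local medical-image checkpoint, fine-tuned, and scored by AUC.

```python
# Minimal sketch: compare ImageNet vs. RadImageNet initialization on a small task.
# The checkpoint path "RadImageNet-ResNet50_notop.h5" is a hypothetical filename.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

def build_classifier(init):
    # init: "imagenet" for photographic pretraining, a local checkpoint path for
    # radiologic pretraining, or None for random initialization.
    backbone = tf.keras.applications.ResNet50(
        include_top=False,
        weights="imagenet" if init == "imagenet" else None,
        input_shape=(224, 224, 3),
        pooling="avg",
    )
    if init is not None and init != "imagenet":
        backbone.load_weights(init)  # hypothetical RadImageNet .h5 checkpoint
    out = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
    model = tf.keras.Model(backbone.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy")
    return model

# Toy stand-ins for a small downstream dataset (replace with real images/labels).
x_tr, y_tr = np.random.rand(32, 224, 224, 3), np.random.randint(0, 2, 32)
x_va, y_va = np.random.rand(16, 224, 224, 3), np.random.randint(0, 2, 16)

for init in ["imagenet", "RadImageNet-ResNet50_notop.h5"]:  # second path hypothetical
    model = build_classifier(init)
    model.fit(x_tr, y_tr, epochs=1, batch_size=8, verbose=0)
    preds = model.predict(x_va, verbose=0).ravel()
    print(init, "AUC:", roc_auc_score(y_va, preds))
```

The key design point the abstract reports is that only the initialization differs between the two arms; architecture, data, and training schedule are held fixed, so AUC differences can be attributed to the pretraining domain.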
For accurate diagnosis of interstitial lung disease (ILD), a consensus of radiologic, pathologic, and clinical findings is vital. Management of ILD also requires thorough follow-up with computed tomography (CT) studies and lung function tests to assess disease progression, severity, and response to treatment. However, accurate classification of ILD subtypes can be challenging, especially for clinicians who do not read chest CT studies regularly. Dynamic models that predict patient survival from longitudinal data are also difficult to build because of disease complexity, interpatient variation, and irregular visit intervals. Here, we use RadImageNet pretrained models to diagnose five types of ILD from multimodal data and a transformer model to estimate a patient's 3-year survival rate. When clinical history and associated CT scans are available, the proposed deep learning system can help clinicians diagnose and classify patients with ILD and, importantly, dynamically predict disease progression and prognosis.
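The following is a minimal sketch of the longitudinal idea only: per-visit image features (e.g., pooled outputs of a RadImageNet-pretrained CNN) are fused with per-visit clinical variables and passed through a small transformer encoder to predict 3-year survival. The layer sizes, fusion scheme, and variable names are assumptions for illustration, not the paper's architecture.

```python
# Sketch: transformer over a sequence of visits for 3-year survival prediction.
# Visit-time encodings and padding masks for irregular intervals are omitted.
import tensorflow as tf

NUM_VISITS, FEAT_DIM, CLIN_DIM, D_MODEL = 8, 2048, 16, 128

img_feats = tf.keras.Input(shape=(NUM_VISITS, FEAT_DIM))  # pooled CNN features/visit
clin = tf.keras.Input(shape=(NUM_VISITS, CLIN_DIM))       # clinical variables/visit

# Project the fused per-visit representation into the model dimension.
x = tf.keras.layers.Dense(D_MODEL)(
    tf.keras.layers.Concatenate()([img_feats, clin]))

# Self-attention lets the model weigh visits against one another, which is one
# way to cope with irregular follow-up schedules.
attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=D_MODEL // 4)(x, x)
x = tf.keras.layers.LayerNormalization()(x + attn)
ff = tf.keras.layers.Dense(D_MODEL, activation="relu")(x)
x = tf.keras.layers.LayerNormalization()(x + ff)

x = tf.keras.layers.GlobalAveragePooling1D()(x)
survival = tf.keras.layers.Dense(1, activation="sigmoid",
                                 name="three_year_survival")(x)

model = tf.keras.Model([img_feats, clin], survival)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```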
Background: Patellofemoral anatomy has not been well characterized. Applying deep learning to measure knee anatomy automatically can provide a better understanding of that anatomy, which can be a key factor in improving outcomes. Methods: A total of 483 patients with knee CT imaging (April 2017–May 2022) from 6 centers were selected, drawn from a cohort scheduled for knee arthroplasty and a cohort with healthy knee anatomy. Seven patellofemoral landmarks were annotated on 14,652 images and approved by a senior musculoskeletal radiologist. A two-stage deep learning model was trained to predict landmark coordinates using a modified ResNet50 architecture initialized with weights pretrained on RadImageNet via self-supervised learning. Landmark predictions were evaluated with mean absolute error, and derived patellofemoral measurements were analyzed with Bland–Altman plots. Statistical significance of measurement differences was assessed with paired t-tests. Results: The mean absolute error between predicted and ground truth landmark coordinates was 0.20 cm in the healthy cohort and 0.26 cm in the arthroplasty cohort. Four knee parameters were calculated: transepicondylar axis length, transepicondylar–posterior femoral axis angle, trochlear medial asymmetry, and sulcus angle. No statistically significant differences (p > 0.05) between predicted and ground truth measurements were found in either cohort, except for the sulcus angle in the healthy cohort. Conclusion: Our model identifies key trochlear landmarks with approximately 0.20–0.26 cm accuracy and produces measurements comparable to human annotation on both healthy and pathological knees. This work represents the first deep learning regression model for automated patellofemoral annotation trained on both physiologic and pathologic CT imaging at this scale, and it can enhance our ability to analyze the anatomy of the patellofemoral compartment at scale.
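To illustrate how landmark coordinates become anatomic measurements, here is a minimal sketch for one of the reported parameters, the sulcus angle, computed as the angle at the deepest point of the trochlear groove between rays to the medial and lateral facet peaks. The landmark names and example coordinates are hypothetical; the study's exact landmark definitions and slice selection may differ.

```python
# Sketch: derive the sulcus angle from three predicted landmark coordinates.
import numpy as np

def angle_at_vertex(vertex, p1, p2):
    """Angle in degrees at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(vertex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(vertex, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical axial-slice coordinates in cm:
sulcus = np.array([4.0, 2.0])         # deepest point of the trochlear groove
medial_facet = np.array([2.0, 2.8])   # medial trochlear facet peak
lateral_facet = np.array([6.0, 2.9])  # lateral trochlear facet peak

print(f"Sulcus angle: {angle_at_vertex(sulcus, medial_facet, lateral_facet):.1f} deg")
# Prints roughly 134 deg for these example points, in the range typically
# reported for the sulcus angle.
```

Because the measurements are pure geometry over predicted coordinates, the 0.20–0.26 cm landmark error propagates directly into the derived angles and lengths, which is why agreement was assessed with Bland–Altman plots rather than landmark error alone.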