Purpose To analyze how automatic segmentation compares with manual segmentation in the accuracy and precision of the resulting morphology and relaxometry measures, and to increase the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated by Dice coefficient overlap with the manual segmentation, as well as by the automatic segmentations' ability to quantify relaxometry and morphology in a longitudinally repeatable way. Results The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments, and 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification of T1ρ and T2 values were 0.8233 and 0.8603, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characterization, values that can be used in the monitoring and diagnosis of OA.
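The accuracy metric used above is the Dice coefficient between automatic and manual masks. As a minimal illustrative sketch (not the authors' code), and with purely synthetic mask shapes, the overlap can be computed as follows:

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between binary automatic and manual segmentation masks."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    return (2.0 * intersection) / (auto.sum() + manual.sum() + eps)

# Hypothetical example: two 3D masks for a single cartilage compartment.
rng = np.random.default_rng(0)
manual = rng.random((64, 256, 256)) > 0.9
auto = manual.copy()
auto[0] = False  # simulate a small slice-level disagreement
print(f"Dice = {dice_coefficient(auto, manual):.3f}")
```

A value of 1.0 indicates perfect overlap; the 0.753-0.878 range reported above therefore reflects substantial but imperfect agreement with the manual reference.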
Osteoarthritis (OA) classification in the knee is most commonly performed on radiographs using the 0-4 Kellgren-Lawrence (KL) grading system, where 0 is normal, 1 shows doubtful signs of OA, 2 is mild OA, 3 is moderate OA, and 4 is severe OA. KL grading is widely used for clinical assessment and diagnosis of OA, usually on a high volume of radiographs, making its automation highly relevant. We propose a fully automated algorithm for the detection of OA using KL gradings with a state-of-the-art neural network. Four thousand four hundred ninety bilateral PA fixed-flexion knee radiographs were collected from the Osteoarthritis Initiative dataset (age = 61.2 ± 9.2 years, BMI = 32.8 ± 15.9 kg/m², 42/58 male/female split) for six different time points. The left and right knee joints were localized using a U-Net model. These localized images were used to train an ensemble of DenseNet neural network architectures for the prediction of OA severity. This ensemble of DenseNets achieved testing sensitivity rates for no OA, mild, moderate, and severe OA of 83.7, 70.2, 68.9, and 86.0%, respectively. The corresponding specificity rates were 86.1, 83.8, 97.1, and 99.1%. Using saliency maps, we confirmed that the neural networks producing these results were in fact selecting the correct osteoarthritic features used in detection. These results support the use of our automatic classifier to assist radiologists in making more accurate and precise diagnoses given the increasing volume of radiographic images being acquired in the clinic.
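The abstract does not include implementation details; the sketch below only illustrates the general idea of averaging softmax outputs from several DenseNet backbones over the five KL grades. The backbone choice (densenet121), ensemble size, and input size are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_KL_GRADES = 5  # KL grades 0-4

def make_densenet(num_classes: int = NUM_KL_GRADES) -> nn.Module:
    # Randomly initialised DenseNet backbone with a KL-grade classification head.
    net = models.densenet121(weights=None)
    net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    return net

ensemble = [make_densenet().eval() for _ in range(3)]  # assumed three-member ensemble

@torch.no_grad()
def predict_kl_grade(knee_crop: torch.Tensor) -> int:
    """knee_crop: (1, 3, H, W) localized knee region; returns the argmax KL grade."""
    probs = torch.stack([torch.softmax(m(knee_crop), dim=1) for m in ensemble]).mean(dim=0)
    return int(probs.argmax(dim=1))

example = torch.randn(1, 3, 224, 224)  # placeholder for a localized knee radiograph
print(predict_kl_grade(example))
```

Saliency maps such as those mentioned above can then be obtained by backpropagating the predicted class score to the input pixels and inspecting the gradient magnitudes.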
Background: Semiquantitative assessment of MRI plays a central role in musculoskeletal research; however, in the clinical setting MRI reports often tend to be subjective and qualitative. Grading schemes utilized in research are not used clinically because they are extraordinarily time-consuming and unfeasible in clinical practice. Purpose: To evaluate the ability of deep-learning models to detect and stage the severity of meniscus and patellofemoral cartilage lesions in osteoarthritis and anterior cruciate ligament (ACL) subjects. Study Type: Retrospective study aimed at evaluating a technical development. Population: In all, 1478 MRI studies, including subjects at various stages of osteoarthritis and after ACL injury and reconstruction. Field Strength/Sequence: 3T MRI, 3D FSE CUBE. Assessment: Automatic segmentation of cartilage and meniscus using a 2D U-Net, and automatic detection and severity staging of meniscus and cartilage lesions with a 3D convolutional neural network (3D-CNN). Statistical Tests: Receiver operating characteristic (ROC) curve, specificity and sensitivity, and class accuracy. Results: A sensitivity of 89.81% and specificity of 81.98% for meniscus lesion detection and a sensitivity of 80.0% and specificity of 80.27% for cartilage were achieved. The best performances for staging lesion severity were obtained by including demographic factors, achieving accuracies of 80.74%, 78.02%, and 75.00% for normal, small, and complex large lesions, respectively. Data Conclusion: In this study we provide a proof of concept of a fully automated deep-learning pipeline that can identify the presence of meniscal and patellar cartilage lesions. This pipeline has also shown potential for more in-depth examination of lesion subjects through multiclass prediction and severity staging. Level of Evidence: 2 Technical Efficacy: Stage 2
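As a hedged sketch of the staging step described above (the actual architecture, input sizes, and demographic covariates are not specified in the abstract and are assumed here), a 3D-CNN that pools a segmented sub-volume and concatenates demographic features before classification could look like this:

```python
import torch
import torch.nn as nn

class Lesion3DCNN(nn.Module):
    """Toy 3D-CNN: volume features + demographic covariates -> lesion severity class."""

    def __init__(self, num_classes: int = 3, num_demographics: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32 + num_demographics, num_classes)

    def forward(self, volume: torch.Tensor, demographics: torch.Tensor) -> torch.Tensor:
        x = self.features(volume).flatten(1)     # (N, 32) pooled volume features
        x = torch.cat([x, demographics], dim=1)  # append age/sex/BMI-style covariates
        return self.classifier(x)                # logits over normal / small / complex large

model = Lesion3DCNN()
vol = torch.randn(2, 1, 32, 64, 64)  # placeholder segmented sub-volumes
demo = torch.randn(2, 3)             # placeholder demographic features
print(model(vol, demo).shape)        # torch.Size([2, 3])
```

The three output classes mirror the normal, small, and complex large categories whose accuracies are reported above.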
Objective: We aim to study to what extent conventional and deep-learning-based T2 relaxometry patterns are able to distinguish between knees with and without radiographic osteoarthritis (OA). Methods: T2 relaxation time maps were analyzed for 4,384 subjects from the baseline Osteoarthritis Initiative (OAI) dataset. Voxel-Based Relaxometry (VBR) was used for automatic quantification and voxel-based analysis of the differences in T2 between subjects with and without radiographic OA. A Densely Connected Convolutional Neural Network (DenseNet) was trained to diagnose OA from T2 data. For comparison, more classical feature extraction techniques and shallow classifiers were used to benchmark the performance of our algorithm. Deep and shallow models were evaluated with and without the inclusion of risk factors. Sensitivity and specificity values and the McNemar test were used to compare the performance of the different classifiers. Results: The best shallow model was obtained when the first ten principal components, demographics, and pain score were included as features (AUC = 77.77%, sensitivity = 67.01%, specificity = 71.79%). In comparison, DenseNet trained on raw T2 data obtained AUC = 83.44%, sensitivity = 76.99%, specificity = 77.94%. The McNemar test on the misclassified proportions from the shallow and deep models showed that the boost in performance was statistically significant (McNemar's chi-squared = 10.33, degrees of freedom (DF) = 1, P-value = 0.0013). Conclusion: In this study, we presented a Magnetic Resonance Imaging (MRI)-based data-driven platform using T2 measurements to characterize radiographic OA. Our results showed that feature learning from T2 maps has potential in uncovering information that can better diagnose OA than simple averages or linear pattern decompositions.
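The significance claim above rests on McNemar's test applied to the two models' per-subject correctness. As a self-contained sketch with synthetic data (the subject count and accuracies below are placeholders, not the study's values), the continuity-corrected statistic is computed from the discordant pairs:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
n_subjects = 500
shallow_correct = rng.random(n_subjects) < 0.70  # assumed accuracy of the shallow model
deep_correct = rng.random(n_subjects) < 0.78     # assumed accuracy of the DenseNet

# Discordant pairs: b = shallow right / deep wrong, c = shallow wrong / deep right.
b = int(np.sum(shallow_correct & ~deep_correct))
c = int(np.sum(~shallow_correct & deep_correct))

mcnemar_stat = (abs(b - c) - 1) ** 2 / (b + c)  # chi-squared with continuity correction
p_value = chi2.sf(mcnemar_stat, df=1)
print(f"McNemar chi-squared = {mcnemar_stat:.2f}, DF = 1, p = {p_value:.4f}")
```

A small p-value indicates that the two classifiers' errors are asymmetric, i.e., one misclassifies significantly fewer subjects than the other.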