Purpose: To analyze how the accuracy and precision of automatic segmentation translate to morphology and relaxometry compared with manual segmentation, and how automation increases the speed and accuracy of the workflow that uses quantitative magnetic resonance (MR) imaging to study knee degenerative diseases such as osteoarthritis (OA). Materials and Methods: This retrospective study involved the analysis of 638 MR imaging volumes from two data cohorts acquired at 3.0 T: (a) spoiled gradient-recalled acquisition in the steady state T1ρ-weighted images and (b) three-dimensional (3D) double-echo steady-state (DESS) images. A deep learning model based on the U-Net convolutional network architecture was developed to perform automatic segmentation. Cartilage and meniscus compartments were manually segmented by skilled technicians and radiologists for comparison. Performance of the automatic segmentation was evaluated on Dice coefficient overlap with the manual segmentation, as well as on the automatic segmentations' ability to quantify relaxometry and morphology in a longitudinally repeatable way. Results: The models produced strong Dice coefficients, particularly for 3D-DESS images, ranging from 0.770 to 0.878 in the cartilage compartments and reaching 0.809 and 0.753 for the lateral and medial meniscus, respectively. The models averaged 5 seconds to generate the automatic segmentations. Average correlations between manual and automatic quantification were 0.8233 and 0.8603 for T1ρ and T2 values, respectively, and 0.9349 and 0.9384 for volume and thickness, respectively. Longitudinal precision of the automatic method was comparable with that of the manual one. Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations that can be used to extract relaxation times and morphologic characteristics for the monitoring and diagnosis of OA.
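As a rough illustration of the evaluation described above, the sketch below computes the Dice overlap between an automatic and a manual segmentation mask and the mean relaxation time inside a segmented compartment. The array shapes and values are synthetic placeholders, not data from the study.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice overlap between a binary automatic and manual segmentation."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    denom = auto.sum() + manual.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def mean_relaxation_time(mask: np.ndarray, relaxation_map: np.ndarray) -> float:
    """Mean T1rho/T2 value (ms) inside a segmented compartment."""
    return float(relaxation_map[mask.astype(bool)].mean())

# Synthetic volumes for illustration only (hypothetical shapes and values)
rng = np.random.default_rng(0)
manual = rng.random((64, 128, 128)) > 0.9          # pretend manual cartilage mask
auto = manual.copy()                                # pretend automatic mask
t2_map = rng.normal(35.0, 5.0, size=manual.shape)   # synthetic T2 map in ms
print(f"Dice: {dice_coefficient(auto, manual):.3f}")
print(f"Mean T2 in cartilage mask: {mean_relaxation_time(auto, t2_map):.1f} ms")
```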
Purpose: To develop a fully automatic, local, and unbiased way of studying knee T1ρ relaxation times by creating an atlas and using voxel-based relaxometry (VBR) in OA and ACL subjects, and to compare it with the classical ROI-based approach. Materials and Methods: In this study, 110 subjects from two cohorts were analyzed: (i) a mild-OA cohort of 40 patients with mild OA (KL ≤ 2) and 15 controls (KL ≤ 1); and (ii) an ACL cohort (a model for early OA) of 40 ACL-injured patients imaged prior to ACL reconstruction and 1 year after surgery, plus 15 controls. All subjects were imaged at 3T with a protocol that included 3D-FSE (CUBE) and 3D-T1ρ sequences. A nonrigid registration technique was applied to align all images to a single template, allowing VBR to assess local statistical differences in T1ρ values using z-score analysis. VBR results were compared with those obtained with the classical ROI-based technique. Results: ROI-based results from atlas-based segmentation were consistent with the classical ROI-based method (CV = 3.83%). Voxel-based group analysis revealed local patterns that were overlooked by the ROI-based approach; for example, VBR showed significant T1ρ elevations in the posterior lateral femur and posterior lateral tibia of ACL-injured patients (sample mean z-scores = 9.7 and 10.3). These elevations were overlooked by the classical ROI-based approach (sample mean z-scores = 1.87 and −1.73). Conclusion: VBR is a feasible and accurate tool for the local evaluation of the biochemical composition of knee articular cartilage and is capable of detecting specific local patterns on T1ρ maps in OA and ACL subjects.
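The voxel-based z-score analysis can be sketched as follows, assuming the T1ρ maps have already been nonrigidly registered to a common template. The control and patient arrays, their sizes, and the cartilage mask below are synthetic stand-ins for the atlas-aligned data, not the study's maps.

```python
import numpy as np

def voxelwise_zscores(control_maps: np.ndarray, patient_map: np.ndarray,
                      mask: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """
    Per-voxel z-scores of a patient's T1rho map relative to a control group.
    All maps are assumed to be nonrigidly registered to the same template.
    control_maps: (n_controls, Z, Y, X); patient_map and mask: (Z, Y, X).
    """
    mu = control_maps.mean(axis=0)
    sigma = control_maps.std(axis=0)
    z = (patient_map - mu) / (sigma + eps)
    z[~mask.astype(bool)] = np.nan   # restrict analysis to the cartilage atlas mask
    return z

# Synthetic illustration (hypothetical sizes; real maps come from registration)
rng = np.random.default_rng(1)
controls = rng.normal(40.0, 4.0, size=(15, 32, 64, 64))   # 15 control T1rho maps (ms)
patient = rng.normal(44.0, 4.0, size=(32, 64, 64))         # patient map with elevated T1rho
cart_mask = np.ones((32, 64, 64), dtype=bool)
z_map = voxelwise_zscores(controls, patient, cart_mask)
print(f"Mean z-score inside mask: {np.nanmean(z_map):.2f}")
```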
Background: Semiquantitative assessment of MRI plays a central role in musculoskeletal research; however, in the clinical setting MRI reports often tend to be subjective and qualitative. The grading schemes utilized in research are not used clinically because they are extraordinarily time-consuming and infeasible in clinical practice. Purpose: To evaluate the ability of deep-learning models to detect and stage the severity of meniscus and patellofemoral cartilage lesions in osteoarthritis and anterior cruciate ligament (ACL) subjects. Study Type: Retrospective study aimed at evaluating a technical development. Population: In all, 1478 MRI studies, including subjects at various stages of osteoarthritis and after ACL injury and reconstruction. Field Strength/Sequence: 3T MRI, 3D FSE CUBE. Assessment: Automatic segmentation of cartilage and meniscus using a 2D U-Net, and automatic detection and severity staging of meniscus and cartilage lesions with a 3D convolutional neural network (3D-CNN). Statistical Tests: Receiver operating characteristic (ROC) curve, specificity and sensitivity, and class accuracy. Results: A sensitivity of 89.81% and specificity of 81.98% for meniscus lesion detection and a sensitivity of 80.0% and specificity of 80.27% for cartilage were achieved. The best performances for staging lesion severity were obtained by including demographic factors, achieving accuracies of 80.74%, 78.02%, and 75.00% for normal, small, and complex large lesions, respectively. Data Conclusion: In this study we provide a proof of concept of a fully automated deep-learning pipeline that can identify the presence of meniscal and patellar cartilage lesions. This pipeline has also shown potential for more in-depth examination of subjects with lesions through multiclass prediction and severity staging. Level of Evidence: 2 Technical Efficacy: Stage 2
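A minimal sketch of the second pipeline stage is given below: a small 3D convolutional network fuses a segmented image patch with demographic covariates before a three-class severity head (normal / small / complex large). The layer sizes, patch dimensions, and demographic encoding are illustrative assumptions, not the published 3D-CNN.

```python
import torch
import torch.nn as nn

class LesionStager(nn.Module):
    """Sketch of a 3D-CNN that fuses an image patch with demographics
    (e.g. age, sex, BMI) before a 3-class severity classifier."""
    def __init__(self, n_demographics: int = 3, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 + n_demographics, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, volume: torch.Tensor, demographics: torch.Tensor) -> torch.Tensor:
        x = self.features(volume).flatten(1)      # (B, 32) image features
        x = torch.cat([x, demographics], dim=1)   # append age / sex / BMI
        return self.classifier(x)                 # severity class logits

# Forward pass on a synthetic cartilage patch (hypothetical patch size)
model = LesionStager()
patch = torch.randn(2, 1, 32, 64, 64)
demo = torch.tensor([[55.0, 1.0, 27.3], [62.0, 0.0, 31.1]])
print(model(patch, demo).shape)                   # torch.Size([2, 3])
```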
Osteoarthritis (OA) classification in the knee is most commonly performed on radiographs using the 0-4 Kellgren-Lawrence (KL) grading system, where 0 is normal, 1 shows doubtful signs of OA, 2 is mild OA, 3 is moderate OA, and 4 is severe OA. KL grading is widely used for clinical assessment and diagnosis of OA, usually on a high volume of radiographs, making its automation highly relevant. We propose a fully automated algorithm for the detection of OA using KL grading with a state-of-the-art neural network. Four thousand four hundred ninety bilateral PA fixed-flexion knee radiographs were collected from the Osteoarthritis Initiative dataset (age = 61.2 ± 9.2 years, BMI = 32.8 ± 15.9 kg/m², 42/58 male/female split) for six different time points. The left and right knee joints were localized using a U-Net model. These localized images were used to train an ensemble of DenseNet neural network architectures for the prediction of OA severity. This ensemble of DenseNets achieved testing sensitivity rates for no OA, mild, moderate, and severe OA of 83.7, 70.2, 68.9, and 86.0%, respectively. The corresponding specificity rates were 86.1, 83.8, 97.1, and 99.1%. Using saliency maps, we confirmed that the neural networks producing these results were in fact selecting the correct osteoarthritic features used in detection. These results suggest the use of our automatic classifier to assist radiologists in making more accurate and precise diagnoses given the increasing volume of radiographic images being acquired in the clinic.
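The ensembling step can be sketched as averaging softmax outputs across independently trained DenseNet members, as below. The use of torchvision's densenet121, the three-member ensemble, and the untrained weights are assumptions for illustration, not the trained models from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet121

N_GRADES = 5  # KL grades 0-4

def build_member() -> torch.nn.Module:
    """One ensemble member; in practice each would load its own trained weights."""
    return densenet121(num_classes=N_GRADES).eval()

@torch.no_grad()
def ensemble_kl_grade(models, knee_image: torch.Tensor) -> int:
    """Average softmax probabilities over the ensemble and return the KL grade."""
    probs = torch.stack([F.softmax(m(knee_image), dim=1) for m in models]).mean(0)
    return int(probs.argmax(dim=1).item())

models = [build_member() for _ in range(3)]   # hypothetical 3-member ensemble
knee = torch.randn(1, 3, 224, 224)            # synthetic localized knee crop
print(f"Predicted KL grade: {ensemble_kl_grade(models, knee)}")
```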
Purpose: Hip fractures are a common cause of morbidity and mortality. Automatic identification and classification of hip fractures using deep learning may improve outcomes by reducing diagnostic errors and decreasing time to operation. Methods: Hip and pelvic radiographs from 1118 studies were reviewed, and 3034 hips were labeled via bounding boxes and classified as normal, displaced femoral neck fracture, nondisplaced femoral neck fracture, intertrochanteric fracture, previous ORIF, or previous arthroplasty. A deep learning-based object detection model was trained to automate the placement of the bounding boxes. A Densely Connected Convolutional Neural Network (DenseNet) was trained on a subset of the bounding box images, and its performance was evaluated on a held-out test set and by comparison on a 100-image subset with two groups of human observers: fellowship-trained radiologists and orthopaedists, and senior residents in emergency medicine, radiology, and orthopaedics. Results: The binary accuracy for fracture of our model was 93.8% (95% CI, 91.3-95.8%), with a sensitivity of 92.7% (95% CI, 88.7-95.6%) and a specificity of 95.0% (95% CI, 91.5-97.3%). Multiclass classification accuracy was 90.4% (95% CI, 87.4-92.9%). When compared with human observers, our model achieved at least expert-level classification under all conditions. Additionally, when the model was used as an aid, human performance improved, with aided resident performance approximating unaided fellowship-trained expert performance. Conclusions: Our deep learning model identified and classified hip fractures with at least expert-level accuracy, and when used as an aid it improved human performance, with aided resident performance approximating that of unaided fellowship-trained attendings.
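For illustration, the sketch below computes binary sensitivity, specificity, and accuracy with 95% Wilson score confidence intervals from predicted fracture labels. The labels are synthetic, and the Wilson interval is one common choice, not necessarily the interval the authors used.

```python
import numpy as np

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (e.g. sensitivity)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def fracture_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Binary sensitivity/specificity/accuracy with 95% CIs (1 = fracture)."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "accuracy": ((tp + tn) / len(y_true), wilson_ci(tp + tn, len(y_true))),
    }

# Synthetic labels and predictions for illustration only
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=300)
y_pred = np.where(rng.random(300) < 0.93, y_true, 1 - y_true)  # ~93% correct
for name, (value, (lo, hi)) in fracture_metrics(y_true, y_pred).items():
    print(f"{name}: {value:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```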