Three-dimensional (3D) image-based anatomical analysis of rotator cuff tear patients has been proposed as a way to improve the analysis of repair prognosis and thereby reduce the incidence of postoperative retears. However, for clinical application, an efficient and robust method for the segmentation of anatomy from MRI is required. We present the use of a deep learning network for the automatic segmentation of the humerus, scapula, and rotator cuff muscles with integrated automatic result verification. Trained on N = 111 and tested on N = 60 diagnostic T1-weighted MRI of 76 rotator cuff tear patients acquired from 19 centers, a nnU-Net segmented the anatomy with an average Dice coefficient of 0.91 ± 0.06. For the automatic identification of inaccurate segmentations during inference, the nnU-Net framework was adapted to allow for the estimation of label-specific network uncertainty directly from its subnetworks. The average Dice coefficient of the segmentation results from the subnetworks identified labels requiring segmentation correction with an average sensitivity of 1.0 and a specificity of 0.94. The presented automatic methods facilitate the use of 3D diagnosis in clinical routine by eliminating the need for time-consuming manual segmentation and slice-by-slice segmentation verification.
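The label-wise verification described above can be sketched as an agreement check between ensemble members: the Dice coefficient is computed pairwise between the predictions of the subnetworks (e.g., the cross-validation folds of a nnU-Net ensemble), and a label is flagged for manual review when the mean pairwise Dice falls below a threshold. This is a minimal illustration, not the published implementation; the function names, array layout, and threshold value are assumptions.

```python
# Sketch of label-specific uncertainty estimation from ensemble subnetworks.
# Assumptions: each subnetwork prediction is an integer label map of the same
# shape; the flagging threshold (0.85) is illustrative, not from the paper.
from itertools import combinations
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def flag_uncertain_labels(fold_preds, labels, threshold=0.85):
    """Return labels whose mean pairwise Dice across subnetwork
    predictions falls below the threshold (candidates for review)."""
    flagged = []
    for label in labels:
        masks = [p == label for p in fold_preds]
        scores = [dice(m1, m2) for m1, m2 in combinations(masks, 2)]
        if np.mean(scores) < threshold:
            flagged.append(label)
    return flagged
```

A label on which all subnetworks agree yields pairwise Dice values near 1.0 and passes; a label where even one fold diverges strongly pulls the mean down and is flagged, which matches the high sensitivity reported above.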
Fat fraction of the rotator cuff muscles has been shown to be a predictor of rotator cuff repair failure. In clinical diagnosis, the fat fraction of the affected muscle is typically assessed visually on the oblique 2D Y-view and categorized according to the Goutallier scale on T1-weighted MRI. To enable a quantitative fat fraction measure of the rotator cuff muscles, an automated analysis of the whole muscle and the Y-view slice was developed utilizing 2-point Dixon MRI. 3D nnU-Net models were trained on water-only 2-point Dixon data and corresponding annotations for the automatic segmentation of the supraspinatus, humerus, and scapula and for the detection of 3 anatomical landmarks for the automatic reconstruction of the Y-view slice. The supraspinatus was segmented with a Dice coefficient of 90% (N = 24), and automatic fat fraction measurements differed from manual measurements by 1.5% for the whole-muscle and 0.6% for the Y-view evaluation (N = 21). The presented automatic analysis demonstrates the feasibility of a 3D quantification of the fat fraction of the rotator cuff muscles for the investigation of more accurate predictors of rotator cuff repair outcome.
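The quantitative readout above reduces to a simple voxel-wise computation: from the water and fat images of a 2-point Dixon acquisition, the fat fraction F/(F+W) is averaged over a binary muscle mask (whole muscle or the Y-view slice). The following is a minimal sketch under that assumption; the epsilon guard and variable names are illustrative, not from the paper.

```python
# Sketch: mean fat fraction over a segmented muscle from 2-point Dixon data.
# Assumptions: water/fat magnitude images are co-registered numpy arrays and
# the mask is a binary array of the same shape; eps avoids division by zero.
import numpy as np

def mean_fat_fraction(water: np.ndarray, fat: np.ndarray,
                      mask: np.ndarray, eps: float = 1e-8) -> float:
    """Mean fat fraction (in percent) over the masked muscle volume."""
    ff = fat / (water + fat + eps)   # voxel-wise fat fraction F / (F + W)
    return float(100.0 * ff[mask > 0].mean())
```

Restricting the same computation to the reconstructed Y-view slice (a 2D mask) yields the slice-based measure compared against manual Goutallier-style assessment above.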
Rotator cuff tears (RCT) are one of the most common sources of shoulder pain. Many factors can be considered when choosing the right surgical treatment procedure. Among the most important factors are the tear retraction and tear width, assessed manually on preoperative MRI. A novel approach to automatically quantify a rotator cuff tear, based on the segmentation of the tear from MRI images, was developed and validated. For segmentation, a neural network was trained, and methods for the automatic calculation of the tear width and retraction from the segmented tear volume were developed. The accuracy of the automatic segmentation and the automated tear analysis was evaluated relative to manual consensus segmentations by two clinical experts. Variance in the manual segmentations was assessed in an interrater variability study of the two clinical experts. The accuracy of the tear retraction calculation based on the developed automatic tear segmentation was 5.3 mm ± 5.0 mm, compared to an interrater variability of the tear retraction calculation based on manual segmentations of 3.6 mm ± 2.9 mm. These results show that an automatic quantification of a rotator cuff tear is possible. The large interrater variability of the manual segmentation-based measurements highlights the difficulty of the tear segmentation task in general.
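One simple way such millimetre measurements could be derived from a segmented tear volume is to measure the extent of the binary mask along anatomical axes using the voxel spacing. The sketch below shows only this generic extent computation; the paper's actual definitions of tear width and retraction (e.g., the reference axes used) are not specified here and the axis-to-measure mapping is an assumption.

```python
# Sketch: physical extents of a binary tear mask along each array axis.
# Assumption: the mask is axis-aligned with the anatomical directions of
# interest, so per-axis extents approximate width/retraction measurements.
import numpy as np

def mask_extents_mm(mask: np.ndarray, spacing_mm) -> tuple:
    """Extent of a binary mask along each array axis, in millimetres.

    mask:       binary numpy array (any dimensionality)
    spacing_mm: voxel spacing per axis, same order as the array axes
    """
    idx = np.nonzero(mask)
    return tuple(
        (coords.max() - coords.min() + 1) * s
        for coords, s in zip(idx, spacing_mm)
    )
```

In practice, the reported ±5 mm accuracy suggests such measurements are sensitive to segmentation boundaries, consistent with the interrater variability noted above.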