Digital breast tomosynthesis (DBT) is an emerging breast cancer screening and diagnostic modality that uses quasi-three-dimensional breast images to provide detailed assessments of the dense tissue within the breast. In this study, a 3D-Mask region-based convolutional neural network (3D-Mask RCNN) computer-aided diagnosis (CAD) framework was developed for mass detection and segmentation, with a comparative analysis of performance on patient subgroups with different clinicopathological characteristics. To this end, 364 DBT samples were used and separated into a training dataset (n = 201) and a testing dataset (n = 163). The detection and segmentation results were evaluated on the testing set and on subgroups of patients with different characteristics, including age range, lesion size, histological type, lesion shape and breast density. The results of our 3D-Mask RCNN framework were compared with those of the 2D-Mask RCNN and Faster RCNN methods. For lesion-based mass detection, the sensitivity of the 3D-Mask RCNN-based CAD was 90% at 0.8 false positives (FPs) per lesion, whereas the 2D-Mask RCNN- and Faster RCNN-based CAD reached 90% sensitivity at 1.3 and 2.37 FPs/lesion, respectively. For breast-based mass detection, the 3D-Mask RCNN achieved a sensitivity of 90% at 0.83 FPs/breast, outperforming the 2D-Mask RCNN and Faster RCNN, which reached 90% sensitivity at 1.24 and 2.38 FPs/breast, respectively. Additionally, the 3D-Mask RCNN achieved significantly (p < 0.05) better performance than the 2D methods on the subgroups of patients aged 40 to 49 years, malignant tumors, spiculated or irregular masses, and dense breasts. Lesion segmentation using the 3D-Mask RCNN achieved an average precision (AP) of 0.934 and a false negative rate (FNR) of 0.053, both better than those achieved by the 2D methods. These results suggest that the 3D-Mask RCNN CAD framework has advantages over 2D-based mass detection both on the whole dataset and on subgroups with different characteristics.
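The detection results above are reported as sensitivity at a given number of FPs per lesion or per breast, i.e. points on an FROC-style curve. The following is a minimal sketch (not the authors' code) of how such an operating point can be tallied from 3D detections; the box format, IoU threshold, and score cutoff are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two 3D boxes given as (z1, y1, x1, z2, y2, x2)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    union = np.prod(a[3:] - a[:3]) + np.prod(b[3:] - b[:3]) - inter
    return inter / (union + 1e-9)

def operating_point(cases, score_thr=0.5, iou_thr=0.1):
    """cases: iterable of (pred_boxes, pred_scores, gt_boxes) per DBT volume.
    Returns (lesion-based sensitivity, mean FPs per case) at one score threshold."""
    tp = fn = fp = n_cases = 0
    for pred_boxes, pred_scores, gt_boxes in cases:
        n_cases += 1
        kept = [p for p, s in zip(pred_boxes, pred_scores) if s >= score_thr]
        # a ground-truth lesion counts as detected if any kept box overlaps it enough
        hit = [any(iou_3d(p, g) >= iou_thr for p in kept) for g in gt_boxes]
        tp += sum(hit)
        fn += len(gt_boxes) - sum(hit)
        # kept boxes that overlap no ground-truth lesion are false positives
        fp += sum(1 for p in kept if all(iou_3d(p, g) < iou_thr for g in gt_boxes))
    return tp / max(tp + fn, 1), fp / max(n_cases, 1)
```

Sweeping `score_thr` over the prediction scores traces the FROC-style curve from which operating points such as "90% sensitivity at 0.8 FPs per lesion" are read.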
Background The diagnostic results of magnetic resonance imaging (MRI) are essential references for arthroscopy, an invasive procedure. A deviation between the medical imaging diagnosis and the arthroscopy results may cause irreversible damage to patients and lead to excessive medical treatment. To improve the diagnosis of meniscus injury, there is an urgent need for auxiliary algorithms that improve the accuracy of radiological diagnosis. Purpose This study presents a fully automatic 3D deep convolutional neural network (DCNN) for meniscus segmentation and the detection of arthroscopically proven meniscus tears. Materials and methods Our institution retrospectively included 533 patients with 546 knees who underwent knee MRI and knee arthroscopy. Sagittal proton density-weighted (PDW) MRI images of 382 knees were used as the training set for our 3D-Mask RCNN. The remaining 164 knees served as the test set to validate the trained network. The masks were hand-drawn by an experienced radiologist, and the reference standard was the arthroscopic surgical reports. The performance statistics included Dice accuracy, sensitivity, specificity, free-response ROC (FROC), receiver operating characteristic (ROC) curve analysis, and bootstrap test statistics. The segmentation performance was compared with a 3D-Unet, and the detection performance was compared with the radiological evaluation by two experienced musculoskeletal radiologists without knowledge of the arthroscopic surgical diagnosis. Results Our model produced a Dice coefficient of 0.924 for sagittal PDW images and a sensitivity of 0.95 at 0.823 FPs/knee. The 3D-Unet produced a Dice coefficient of 0.891 and a sensitivity of 0.95 at 1.355 FPs/knee. The difference in the areas under the 3D-Mask RCNN and 3D-Unet FROC curves was statistically significant (p = 0.0011) by bootstrap test. For tear detection, our model achieved an area under the curve (AUC) value, accuracy, sensitivity, and specificity of 0.907, 0.924, 0.941, and 0.785, respectively. For the radiological evaluation, the AUC value, accuracy, sensitivity, and specificity were 0.834, 0.835, 0.889, and 0.754, respectively. The difference in the areas under the 3D-Mask RCNN and radiological evaluation ROC curves was statistically significant (p = 0.0009) by bootstrap test. These results demonstrate that the 3D-Mask RCNN significantly outperformed both the 3D-Unet and the radiological evaluation. Conclusions 3D-Mask RCNN has demonstrated efficacy and precision for meniscus segmentation and tear detection in knee MRI, which can assist radiologists in improving the accuracy and efficiency of diagnosis. It can also provide effective diagnostic indicators for orthopedic surgeons before arthroscopic surgery and further promote precise treatment.
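Two of the statistics reported above are the Dice coefficient for the predicted meniscus mask and a bootstrap test for the difference in ROC AUC between the model and the radiological evaluation. The sketch below shows one common way to compute them; it is an illustrative formulation under stated assumptions, not the authors' pipeline, and the resample count, variable names, and centred-bootstrap p-value are choices made here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient between two binary 3D masks of the same shape."""
    pred, gt = np.asarray(pred_mask, bool), np.asarray(gt_mask, bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def bootstrap_auc_diff(y_true, score_a, score_b, n_boot=2000, seed=0):
    """Paired bootstrap test for a difference in ROC AUC on the same test knees.
    Returns the observed AUC difference and a two-sided p-value computed from the
    centred resampled differences (one common formulation, assumed here)."""
    y_true, score_a, score_b = map(np.asarray, (y_true, score_a, score_b))
    rng = np.random.default_rng(seed)
    observed = roc_auc_score(y_true, score_a) - roc_auc_score(y_true, score_b)
    diffs, n = [], len(y_true)
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # a resample needs both torn and intact knees
            continue
        diffs.append(roc_auc_score(y_true[idx], score_a[idx])
                     - roc_auc_score(y_true[idx], score_b[idx]))
    diffs = np.asarray(diffs)
    p_value = np.mean(np.abs(diffs - diffs.mean()) >= abs(observed))
    return observed, p_value
```

Applied per knee to the model scores and the radiologists' ratings against the arthroscopic reference, such a paired resampling test yields p-values of the kind reported above (e.g. p = 0.0009 for the ROC comparison).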