Purpose: To compare the mammographic malignant architectural distortion (AD) detection performance of radiologists reading mammographic examinations unaided versus with the support of artificial intelligence (AI) systems.
Materials and Methods: This retrospective case-control study was based on double reading of clinical mammograms acquired between January 2011 and December 2016 at a large tertiary academic medical center. The study included 177 patients with malignant AD and 90 with benign AD. The model was built on the ResNeXt-50 network. The algorithms used deep learning convolutional neural networks, feature classifiers, and image analysis methods to detect AD and output a malignancy score. Accuracy for malignant AD detection was evaluated using the area under the receiver operating characteristic curve (AUC).
Results: The overall AUC was 0.733 (95% CI, 0.673-0.792) for Reader First-1, 0.652 (95% CI, 0.586-0.717) for Reader First-2, and 0.655 (95% CI, 0.590-0.719) for Reader First-3; the overall AUCs for Reader Second-1, 2, and 3 were 0.875 (95% CI, 0.830-0.919), 0.882 (95% CI, 0.839-0.926), and 0.884 (95% CI, 0.841-0.927), respectively. The AUCs for all reader-second radiologists were significantly higher than those for all reader-first radiologists (Reader First-1 vs. Reader Second-1, P = 0.004). The overall AUC was 0.792 (95% CI, 0.660-0.925) for the AI algorithms. The combined assessment of the AI algorithms and Reader First-1 achieved an AUC of 0.880 (95% CI, 0.793-0.968), higher than Reader First-1 alone and the AI algorithms alone. The AI algorithms alone achieved a specificity of 61.1% and a sensitivity of 80.6%. The specificity for Reader First-1 was 55.5% and the sensitivity was 86.1%. The combined assessment of AI and Reader First-1 showed a specificity of 72.7% and a sensitivity of 91.7%, a significant improvement over the AI alone (P < 0.001) and Reader First-1 alone (P = 0.006).
Conclusion: Although the single AI algorithm did not outperform the radiologists, an ensemble of AI algorithms combined with junior radiologist assessments was found to improve the overall accuracy. This study underscores the potential of machine learning methods to enhance mammography interpretation, especially in remote areas and primary hospitals.
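The kind of combined reader-plus-AI assessment and AUC comparison reported above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline: the variable names, the simple score-averaging rule, the case count, and the percentile-bootstrap settings for the confidence intervals are all assumptions.

```python
# Minimal sketch (synthetic data, assumed combination rule): compare the AUC of
# an AI malignancy score, a reader score, and their simple average.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

y_true = rng.integers(0, 2, size=267)     # 1 = malignant AD, 0 = benign AD (synthetic labels)
ai_score = rng.random(267)                # hypothetical AI malignancy score in [0, 1]
reader_score = rng.random(267)            # hypothetical reader rating rescaled to [0, 1]

# Assumed combination rule: unweighted average of the two scores.
combined = 0.5 * ai_score + 0.5 * reader_score

def bootstrap_auc_ci(y, s, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the AUC."""
    idx = np.arange(len(y))
    aucs = []
    for _ in range(n_boot):
        b = rng.choice(idx, size=len(idx), replace=True)
        if len(np.unique(y[b])) < 2:      # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[b], s[b]))
    return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])

for name, score in [("AI", ai_score), ("Reader", reader_score), ("Combined", combined)]:
    lo, hi = bootstrap_auc_ci(y_true, score)
    print(f"{name}: AUC = {roc_auc_score(y_true, score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```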
Objective: To explore the relationship between the mammographic density of a breast mass and its surrounding area and whether the mass is benign or malignant, this study proposes a deep learning model based on C2FTrans that diagnoses breast masses using mammographic density.
Methods: This retrospective study included patients who underwent mammographic and pathological examination. Two physicians manually delineated the lesion edges, and a computer automatically extended and segmented the peripheral areas of the lesion (0, 1, 3, and 5 mm margins, including the lesion). We then obtained the mammary gland density and the different regions of interest (ROIs). A diagnostic model for breast mass lesions based on C2FTrans was constructed using a 7:3 split between the training and testing sets. Finally, receiver operating characteristic (ROC) curves were plotted. Model performance was assessed using the area under the ROC curve (AUC) with 95% confidence intervals (CI), sensitivity, and specificity.
Results: In total, 401 lesions (158 benign and 243 malignant) were included in this study. The probability of breast cancer was positively correlated with age and mass density and negatively correlated with breast gland classification; the strongest correlation was with age (r = 0.47). Among all models, the single-mass ROI model had the highest specificity (91.8%) with an AUC of 0.823, and the perifocal 5 mm ROI model had the highest sensitivity (86.9%) with an AUC of 0.855. In addition, by combining the cephalocaudal and mediolateral oblique views of the perifocal 5 mm ROI model, we obtained the highest AUC (0.877, P < 0.001).
Conclusions: A deep learning model based on mammographic density can better distinguish benign from malignant mass-type lesions in digital mammography images and may become an auxiliary diagnostic tool for radiologists in the future.
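The perifocal ROI construction described above (expanding a manually drawn lesion contour by 0, 1, 3, or 5 mm) can be sketched with morphological dilation. This is a minimal sketch under assumptions, not the authors' code: the toy lesion mask, the 0.1 mm/pixel spacing, and the function name expand_roi are illustrative.

```python
# Minimal sketch (assumed inputs): grow a binary lesion mask by a margin given
# in millimetres, converted to pixels via the image pixel spacing.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def expand_roi(lesion_mask, margin_mm, pixel_spacing_mm):
    """Return a mask covering the lesion plus a surrounding margin in millimetres."""
    lesion_mask = lesion_mask.astype(bool)
    if margin_mm <= 0:
        return lesion_mask
    margin_px = max(1, int(round(margin_mm / pixel_spacing_mm)))
    structure = generate_binary_structure(2, 2)   # 8-connected neighbourhood
    return binary_dilation(lesion_mask, structure=structure, iterations=margin_px)

# Toy circular "lesion" on a 200 x 200 grid; 0.1 mm/pixel spacing is assumed.
mask = np.zeros((200, 200), dtype=bool)
yy, xx = np.ogrid[:200, :200]
mask[(yy - 100) ** 2 + (xx - 100) ** 2 <= 20 ** 2] = True

for mm in (0, 1, 3, 5):
    roi = expand_roi(mask, margin_mm=mm, pixel_spacing_mm=0.1)
    print(f"{mm} mm margin -> {int(roi.sum())} pixels")
```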
Background: Architectural distortion (AD) is a common imaging manifestation of breast cancer but is also seen in benign lesions. This study aimed to construct deep learning models using the mask region-based convolutional neural network (Mask R-CNN) for AD identification in full-field digital mammography (FFDM) and to evaluate the performance of the models for malignant AD diagnosis.
Methods: This retrospective diagnostic study was conducted at the Second Affiliated Hospital of Guangzhou University of Chinese Medicine between January 2011 and December 2020. Patients with AD of the breast on FFDM were included. Machine learning models for AD identification were developed using the Mask R-CNN method. Receiver operating characteristic (ROC) curves, their areas under the curve (AUCs), and recall/sensitivity were used to evaluate the models. The models with the highest AUCs were selected for malignant AD diagnosis.
Results: A total of 349 patients with AD (190 with malignant AD) were enrolled. EfficientNetV2, EfficientNetV1, ResNeXt, and ResNet models were developed for AD identification, with AUCs of 0.89, 0.87, 0.81, and 0.79, respectively. For malignant AD diagnosis, the AUC of EfficientNetV2 was significantly higher than that of EfficientNetV1 (0.89 vs. 0.78, P = 0.001), and the recall/sensitivity of the EfficientNetV2 model was 0.93.
Conclusion: The Mask R-CNN-based EfficientNetV2 model has good diagnostic value for malignant AD.
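A generic Mask R-CNN inference step for obtaining candidate AD regions can be sketched with torchvision (API of torchvision >= 0.13 assumed). This is a minimal illustration, not the study's model: it uses the library's default ResNet-50 FPN backbone rather than the EfficientNet backbones reported above, untrained weights, an arbitrary score threshold, and a synthetic image, purely to show the detection workflow.

```python
# Minimal sketch (not the study's trained model): run a torchvision Mask R-CNN
# on a single-channel mammogram-like image to get candidate AD detections.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Untrained model with two classes (background + AD); weights are placeholders.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# Synthetic stand-in for an FFDM image: grey channel replicated to 3 channels,
# values in [0, 1], as torchvision's detection models expect.
image = torch.rand(1, 512, 512).repeat(3, 1, 1)

with torch.no_grad():
    output = model([image])[0]     # dict with "boxes", "labels", "scores", "masks"

# Keep detections above an arbitrary score threshold as AD candidates.
keep = output["scores"] > 0.5
print(output["boxes"][keep].shape, output["scores"][keep])
```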