Purpose: To evaluate the performance of an artificial intelligence (AI) algorithm in a simulated screening setting and its effectiveness in detecting missed and interval cancers. Methods: Digital mammograms were collected from the Bahcesehir Mammographic Screening Program, the first organized, population-based, 10-year (2009-2019) screening program in Turkey. In total, 211 mammograms were extracted from the archive of the screening program for this retrospective study. Of these, 110 were diagnosed as breast cancer (74 screen-detected, 27 interval, 9 missed) and 101 were negative mammograms with follow-up of at least 24 months. Cancer detection rates of radiologists in the screening program were compared with those of an AI system. Three different mammography assessment methods were used: (1) assessment by 2 radiologists at the screening center, (2) AI assessment based on the established risk score threshold, and (3) a hypothetical radiologist and AI team-up in which the AI was considered a third reader. Results: In ROC analysis of AI cancer detection, the area under the curve was 0.853 (95% CI = 0.801-0.905), and the cut-off value for the risk score was 34.5%, with a sensitivity of 72.8% and a specificity of 88.3%. Cancer detection rates were 67.3% for radiologists, 72.7% for AI, and 83.6% for the radiologist and AI team-up. The AI detected 72.7% of all cancers on its own; of those detected, 77.5% were screen-detected, 15% were interval cancers, and 7.5% were missed cancers. Conclusion: AI may potentially enhance the capacity of breast cancer screening programs by increasing cancer detection rates and decreasing false-negative evaluations.
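The operating point reported above (cut-off 34.5%, sensitivity 72.8%, specificity 88.3%) comes from ROC analysis of the AI risk score. The abstract does not publish the analysis code, but the standard procedure can be sketched as below; the synthetic scores, the class sizes matching the study (110 cancers, 101 negatives), and the use of Youden's J to pick the cut-off are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def roc_points(labels, scores):
    """Sweep each observed score as a cut-off and return (fpr, tpr, thresholds)."""
    order = np.argsort(-scores)            # descending: most suspicious first
    l = labels[order]
    tpr = np.cumsum(l) / l.sum()           # sensitivity at each cut-off
    fpr = np.cumsum(1 - l) / (1 - l).sum() # 1 - specificity at each cut-off
    return fpr, tpr, scores[order]

# Synthetic risk scores (percent): cancers tend to score higher than negatives.
rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(110), np.zeros(101)])
scores = np.concatenate([rng.normal(55, 20, 110), rng.normal(20, 15, 101)])

fpr, tpr, thr = roc_points(labels, scores)

# Trapezoidal area under the ROC curve.
auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Youden's J = sensitivity + specificity - 1; its maximum gives one common cut-off.
j = tpr - fpr
best = int(np.argmax(j))
cutoff, sens, spec = float(thr[best]), float(tpr[best]), float(1 - fpr[best])
```

With real screening data, `cutoff`, `sens`, and `spec` would correspond to the 34.5% / 72.8% / 88.3% figures the abstract reports; here they only demonstrate the mechanics.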
Purpose: To investigate whether a Keras-based convolutional neural network (CNN) model could distinguish glaucoma-suspect cases from glaucoma cases without a visual field test, and to assess the effect of open-source data preprocessing on AI-based glaucoma detection. Methods: 398 glaucoma and 378 glaucoma-suspect cases, confirmed by a glaucoma specialist ophthalmologist, were enrolled in this study. Fundus images were retrieved from an optical coherence tomography device, and open-source graphic software was used to create the training sets. Three distinct groups were prepared: (1) fundus-centered cropped images; (2) grayscale versions of those images with auto white balance applied to enhance features; and (3) the group 2 images with horizontal, vertical, and combined horizontal-plus-vertical flips added. The cropped images were used to train our Keras-based CNN model with 49 deep layers. Each run was trained for 50 epochs, and the performance metrics for each run were recorded. Normality was assessed with the Shapiro-Wilk test. One-way ANOVA was applied to compare the validation accuracy of each image set, with Bonferroni corrections where appropriate. Patient demographics were analyzed with the Mann-Whitney U test and the chi-squared test. Results: The mean ages of glaucoma patients and glaucoma-suspect patients differed significantly (P < 0.001; mean ± standard deviation 62 ± 15 and 45 ± 15, respectively). Gender did not differ significantly between groups (P = 0.388). Validation accuracy in groups 1, 2, and 3 was 0.71 ± 0.02, 0.77 ± 0.02, and 0.85 ± 0.03, respectively (mean ± standard deviation; P < 0.001).
The sensitivities and specificities also differed between groups, and those differences were statistically significant (P < 0.001 for both). Conclusion: In this report, we introduce open-source, easy-to-deploy image preprocessing methods that improve glaucoma detection among glaucoma-suspect cases in stereoscopic optic disc photography-derived fundus images. These methods can be used with any CNN-based computer-aided diagnosis system without requiring a visual field test, helping to reduce the burden associated with undiagnosed glaucoma progression.
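The three preprocessing groups in the Methods (fundus-centered crop; grayscale conversion; flip augmentation) can be sketched with plain NumPy array operations. This is a minimal illustration, not the authors' pipeline: the image size, crop size, and luminance weights are assumptions, and the auto white balance step from group 2 is omitted since the abstract does not specify how the graphic software implements it.

```python
import numpy as np

def center_crop(img, size):
    """Crop an (H, W, 3) image to a centered (size, size, 3) patch (group 1)."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def to_grayscale(img):
    """Luminance-weighted grayscale conversion (part of group 2)."""
    return img @ np.array([0.299, 0.587, 0.114])

def augment_flips(img):
    """Original plus horizontal, vertical, and combined flips (group 3)."""
    return [img, img[:, ::-1], img[::-1, :], img[::-1, ::-1]]

# Synthetic stand-in for a fundus photograph.
rng = np.random.default_rng(1)
fundus = rng.integers(0, 256, size=(512, 512, 3)).astype(float)

cropped = center_crop(fundus, 224)   # group 1: fundus-centered crop
gray = to_grayscale(cropped)         # group 2: grayscale (white balance omitted)
augmented = augment_flips(gray)      # group 3: quadruples the training set
```

Flip augmentation is a plausible reason for group 3's higher validation accuracy: it multiplies the effective training set size fourfold without collecting new images, which typically reduces overfitting in small medical-imaging datasets.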