Colposcopy is widely used to detect cervical cancers, but the experienced physicians needed for accurate diagnosis are scarce in developing countries. Artificial intelligence (AI) has recently been applied to computer-aided diagnosis with remarkable promise. In this study, we developed and validated deep learning models that automatically classify cervical neoplasms on colposcopic photographs. Pre-trained convolutional neural networks were fine-tuned for two grading systems: the cervical intraepithelial neoplasia (CIN) system and the Lower Anogenital Squamous Terminology (LAST) system. The multi-class classification accuracies of the networks for the CIN system in the test dataset were 48.6 ± 1.3% for Inception-ResNet-v2 and 51.7 ± 5.2% for ResNet-152. The accuracies for the LAST system were 71.8 ± 1.8% and 74.7 ± 1.8%, respectively. The area under the curve (AUC) for discriminating high-risk from low-risk lesions with ResNet-152 was 0.781 ± 0.020 for the CIN system and 0.708 ± 0.024 for the LAST system. Lesions requiring biopsy were also detected efficiently (AUC, 0.947 ± 0.030 by ResNet-152) and were localized meaningfully on attention maps. These results suggest the potential of AI for the automated reading of colposcopic photographs.

Cervical cancer is the fourth most common cancer in women worldwide and the second most common cancer among women in developing countries [1]. Screening is the principal prevention strategy for reducing mortality and comprises several steps: population-based Papanicolaou (Pap) testing, colposcopy-directed biopsy of suspicious lesions, and treatment of confirmed pre-cancerous lesions [2,3]. In women with low-grade squamous intraepithelial lesions (LSIL) or high-grade squamous intraepithelial lesions (HSIL), the risk of pre-cancer is intermediate to high, and immediate referral for colposcopy is warranted. However, referring all women with atypical squamous cells of undetermined significance (ASC-US) is considered inefficient, as the risk of such cases being pre-cancerous is lower [4]. Screening programs have been successful in developed countries, leading to an approximately 80% decrease in cervical cancer incidence over the past four decades. In contrast, the rising cervical cancer incidence reported in developing countries [5] has been attributed to the unsuccessful implementation of screening programs, which in turn reflects logistical problems in health systems, infrastructural inadequacies, and a lack of expert physicians able to introduce screening programs and follow-up [6].
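As a rough illustration of the fine-tuning described above, the sketch below adapts an ImageNet-pre-trained ResNet-152 (via torchvision ≥ 0.13) to a multi-class colposcopy grading task. The dataset path, number of grades, and hyperparameters are illustrative assumptions, not the authors' actual training configuration.

```python
# Sketch: fine-tune an ImageNet-pre-trained ResNet-152 for multi-class
# grading of colposcopic photographs (assumes torchvision >= 0.13 and an
# ImageFolder layout with one subdirectory per grade).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5                  # illustrative; set to the number of CIN or LAST grades
DATA_DIR = "colposcopy/train"    # hypothetical dataset path

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

# Load ImageNet weights and replace the final fully connected layer.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):          # illustrative epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```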
Background: Classification of colorectal neoplasms during colonoscopic examination is important to avoid unnecessary endoscopic biopsy or resection. This study aimed to develop and validate deep learning models that automatically classify colorectal lesions histologically on white-light colonoscopy images. Methods: White-light colonoscopy images of colorectal lesions with pathological results were collected and classified into seven categories: T1-T4 colorectal cancer (CRC), high-grade dysplasia (HGD), tubular adenoma (TA), and non-neoplasms. The images were then re-classified into four categories: advanced CRC, early CRC/HGD, TA, and non-neoplasms. Two convolutional neural network models were trained, and their performance was evaluated on an internal test dataset and an external validation dataset. Results: In total, 3828 images were collected from 1339 patients. The mean accuracies of the ResNet-152 model for the seven-category and four-category classifications were 60.2% and 67.3% in the internal test dataset, and 74.7% and 79.2% in the external validation dataset (240 images), respectively. In the external validation, ResNet-152 outperformed two endoscopists in four-category classification and showed a higher mean area under the curve (AUC) for detecting TA+ lesions (0.818) than the worst-performing endoscopist. The mean AUC for detecting HGD+ lesions reached 0.876 with Inception-ResNet-v2. Conclusions: The deep learning models showed promising performance in classifying colorectal lesions on white-light colonoscopy images and could help endoscopists build optimal treatment strategies.
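The kind of evaluation reported above, collapsing seven histologic categories into four and scoring binary detection of neoplastic lesions by AUC, could be computed roughly as in the sketch below. The label encoding and the seven-to-four mapping are assumptions for illustration, not the study's exact coding.

```python
# Sketch: seven-to-four category remapping and ROC AUC for "TA or worse"
# detection from softmax outputs of a trained classifier.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical 7-category label encoding:
# 0-3: T1-T4 CRC, 4: HGD, 5: TA, 6: non-neoplasm
SEVEN_TO_FOUR = {0: 1, 1: 0, 2: 0, 3: 0, 4: 1, 5: 2, 6: 3}
# 0: advanced CRC, 1: early CRC/HGD, 2: TA, 3: non-neoplasm (assumed mapping)

def evaluate(y_true7, probs7):
    """y_true7: (n,) integer labels; probs7: (n, 7) softmax outputs."""
    y_pred7 = probs7.argmax(axis=1)
    acc7 = accuracy_score(y_true7, y_pred7)

    # Collapse both labels and predictions to the four-category scheme.
    map4 = np.vectorize(SEVEN_TO_FOUR.get)
    acc4 = accuracy_score(map4(y_true7), map4(y_pred7))

    # Binary "TA or worse": every category except non-neoplasm (index 6);
    # the score is the summed probability of the neoplastic categories.
    ta_plus_true = (y_true7 != 6).astype(int)
    ta_plus_score = probs7[:, :6].sum(axis=1)
    auc_ta_plus = roc_auc_score(ta_plus_true, ta_plus_score)
    return acc7, acc4, auc_ta_plus
```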
This study developed a convolutional neural network (CNN)-based model to predict the sex and age of patients by identifying unknown distinguishing features in paranasal sinus (PNS) X-ray images. We used a retrospective design with anonymized patient imaging data. Two CNN models, based on the ResNet-152 and DenseNet-169 architectures, were trained to predict sex and age group (20-39, 40-59, and 60+ years). The area under the curve (AUC), accuracy, sensitivity, and specificity were assessed, and class-activation maps (CAMs) were used to locate the discriminative areas. A total of 4160 PNS X-ray images from 4160 patients aged ≥20 years were retrieved from our institution's picture archiving and communication system (PACS). Classification performance in predicting sex (male vs. female) and the three age groups was evaluated for each CNN model. For sex prediction, ResNet-152 performed slightly better than DenseNet-169 (accuracy = 98.0%, sensitivity = 96.9%, specificity = 98.7%, AUC = 0.939). CAMs indicated that the maxillary sinuses (males) and ethmoid sinuses (females) were the major regions used to identify sex. For age prediction, DenseNet-169 was slightly more accurate (77.6 ± 1.5% vs. 76.3 ± 1.1%), and CAMs suggested that the maxillary sinus and the periodontal area were the primary regions used to identify age group. Our deep learning models could predict sex and age from PNS X-ray images and may therefore help reduce the risk of patient misidentification in clinics.
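The class-activation maps mentioned above can be approximated with a gradient-based CAM (Grad-CAM) over the final convolutional block of a torchvision ResNet-152. This is a common stand-in for the study's CAM visualizations, not the authors' exact implementation; the fine-tuned model and `target_class` passed in are hypothetical.

```python
# Sketch: Grad-CAM heatmap over the last convolutional block of a
# torchvision ResNet-152, highlighting the image regions that drive a
# given prediction (e.g., a predicted sex or age group).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """image: preprocessed (1, 3, H, W) tensor; returns an (H, W) heatmap in [0, 1]."""
    cache = {}

    def hook(module, inputs, output):
        cache["maps"] = output                                  # last conv feature maps
        output.register_hook(lambda g: cache.update(grads=g))   # their gradients

    handle = model.layer4.register_forward_hook(hook)           # assumes torchvision ResNet
    model.eval()
    logits = model(image)
    logits[0, target_class].backward()
    handle.remove()

    maps, grads = cache["maps"], cache["grads"]                 # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)              # per-channel importance
    cam = F.relu((weights * maps).sum(dim=1, keepdim=True))     # weighted sum of maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach().cpu().numpy()
```

Overlaying the returned heatmap on the input X-ray (for example with matplotlib's `imshow` and an alpha channel) reproduces the kind of visualization used to argue that particular sinus regions drive the predictions.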