Background: Colposcopy is an important method for diagnosing cervical lesions. However, experienced colposcopists are currently scarce and their training cycle is long, so artificial intelligence-assisted colposcopic examination has great promise. In this paper, a cervical lesion segmentation model (CLS-Model) was proposed to segment lesion regions from colposcopic post-acetic-acid images; accurate segmentation results can provide a solid foundation for further research on lesion classification and biopsy-site selection. Methods: First, an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) was used to obtain the cervical region without interference from other tissues or instruments. Then, a deep convolutional neural network (CLS-Net) was proposed, which used EfficientNet-B3 to extract features of the cervical region and an atrous spatial pyramid pooling (ASPP) module, redesigned according to the size of the lesion region and of the subsampled feature map, to capture multiscale features. Cross-layer feature fusion was also used to achieve fine segmentation of the lesion region. Finally, the segmentation result was mapped back to the original image. Results: Experiments showed that on 5455 LSIL+ (including cervical intraepithelial neoplasia and cervical cancer) colposcopic post-acetic-acid images, the accuracy, specificity, sensitivity, and Dice coefficient of the proposed model were 93.04%, 96.00%, 74.78%, and 73.71%, respectively, all higher than those of mainstream segmentation models. Conclusion: The proposed CLS-Model performs well in segmenting cervical lesions in colposcopic post-acetic-acid images and can help colposcopists improve their diagnostic accuracy.
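To make the described pipeline concrete, below is a minimal PyTorch sketch of a CLS-Net-style decoder: an EfficientNet-B3 encoder, an ASPP module over the deepest feature map, and cross-layer fusion with a shallower feature map before upsampling to a lesion mask. This is not the authors' implementation; the timm backbone, the class names (ASPP, CLSNetSketch), the dilation rates, and the channel widths are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
import timm


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling over the deepest backbone feature map."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class CLSNetSketch(nn.Module):
    """EfficientNet-B3 encoder + ASPP + cross-layer fusion + binary lesion mask."""
    def __init__(self):
        super().__init__()
        # features_only returns feature maps at several downsampling stages
        self.encoder = timm.create_model(
            "efficientnet_b3", pretrained=True, features_only=True)
        chs = self.encoder.feature_info.channels()   # e.g. [24, 32, 48, 136, 384]
        self.aspp = ASPP(chs[-1], 256)
        self.low_proj = nn.Conv2d(chs[1], 48, 1)     # shallow (low-level) features
        self.fuse = nn.Sequential(
            nn.Conv2d(256 + 48, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1))                    # 1-channel lesion logit

    def forward(self, x):
        feats = self.encoder(x)
        deep = self.aspp(feats[-1])                  # multiscale context
        low = self.low_proj(feats[1])                # fine spatial detail
        deep = F.interpolate(deep, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        logits = self.fuse(torch.cat([deep, low], dim=1))
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    model = CLSNetSketch()
    mask_logits = model(torch.randn(1, 3, 384, 384))  # cropped cervical region
    print(mask_logits.shape)  # torch.Size([1, 1, 384, 384])

In this sketch the predicted mask is produced on the cropped cervical region; mapping it back to the original colposcopic image would use the crop coordinates returned by the detection stage.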
Objective: Cervical cancer is among the leading causes of cancer death in women, and early detection of cervical precancerous lesions can effectively improve patient survival. At present, manual diagnosis combining colposcopic images with clinical examination results is the main clinical method. Developing an intelligent, artificial intelligence-based diagnosis algorithm is therefore a natural step toward more objective diagnosis and better diagnostic quality and efficiency. Approach: A colposcopic multimodal fusion convolutional neural network (CMF-CNN) was proposed for the classification of cervical lesions. A Mask Region-based Convolutional Neural Network (Mask R-CNN) was used to detect the cervical region, while the encoding network EfficientNet-B3 extracted multimodal image features from the acetic-acid image and the iodine image. Squeeze-and-Excitation (SE), atrous spatial pyramid pooling (ASPP), and convolution blocks were also adopted to encode and fuse the patient's clinical text information. Main results: The experimental results showed that on 7106 colposcopy cases, the accuracy, macro F1-score, and macro area under the curve of the proposed model were 92.70%, 92.74%, and 98.56%, respectively, superior to mainstream unimodal image classification models. Significance: The proposed CMF-CNN combines multimodal information and achieves high performance in classifying cervical lesions in colposcopy, so it can provide comprehensive diagnostic support.
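The multimodal fusion idea can be sketched as two image branches plus a clinical-feature encoder whose outputs are concatenated before classification. The sketch below is an assumption-laden simplification, not the authors' CMF-CNN: it uses timm EfficientNet-B3 branches, treats the clinical text as an already-encoded numeric vector (clinical_dim is a hypothetical size), and omits the SE and ASPP refinements described in the abstract.

import torch
import torch.nn as nn
import timm


class CMFCNNSketch(nn.Module):
    """Two EfficientNet-B3 image branches + clinical-feature MLP + fusion head."""
    def __init__(self, num_classes=2, clinical_dim=16):
        super().__init__()
        # One branch per modality (acetic-acid and iodine crops);
        # num_classes=0 makes timm return pooled features instead of logits.
        self.acetic_branch = timm.create_model(
            "efficientnet_b3", pretrained=True, num_classes=0)
        self.iodine_branch = timm.create_model(
            "efficientnet_b3", pretrained=True, num_classes=0)
        feat_dim = self.acetic_branch.num_features   # 1536 for EfficientNet-B3
        # Small MLP encoder for the clinical information vector (assumed format)
        self.clinical_encoder = nn.Sequential(
            nn.Linear(clinical_dim, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 128), nn.ReLU(inplace=True))
        # Fusion + classification head over the concatenated modalities
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * 2 + 128, 512), nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes))

    def forward(self, acetic_img, iodine_img, clinical_vec):
        f_acetic = self.acetic_branch(acetic_img)
        f_iodine = self.iodine_branch(iodine_img)
        f_clinical = self.clinical_encoder(clinical_vec)
        fused = torch.cat([f_acetic, f_iodine, f_clinical], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CMFCNNSketch(num_classes=2, clinical_dim=16)
    logits = model(torch.randn(2, 3, 300, 300),   # acetic-acid image crops
                   torch.randn(2, 3, 300, 300),   # iodine image crops
                   torch.randn(2, 16))            # encoded clinical information
    print(logits.shape)  # torch.Size([2, 2])

Late fusion by concatenation, as here, is only one design choice; the abstract's use of SE and ASPP suggests the original model refines the image features before they are fused with the clinical features.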
BACKGROUND: Colposcopy is one of the common methods of cervical cancer screening. The type of cervical transformation zone is considered an important factor in grading colposcopic findings and choosing treatment. OBJECTIVE: This study aims to develop a deep learning-based method for automatically classifying the cervical transformation zone from colposcopy images. METHODS: We proposed a multiscale feature fusion classification network that extracts features from images and fuses them at multiple scales. Cervical regions were first detected in the original colposcopy images and then fed into the multiscale feature fusion classification network. RESULTS: On the test dataset, the proposed network achieved the highest classification accuracy among the compared state-of-the-art image classification models, reaching 88.49%, and its sensitivities for type 1, type 2, and type 3 were 90.12%, 85.95%, and 89.45%, respectively, higher than those of the comparison methods. CONCLUSIONS: The proposed method can automatically classify the cervical transformation zone in colposcopy images and can serve as an auxiliary tool in cervical cancer screening.
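One simple way to realize multiscale feature fusion for a three-class transformation-zone classifier is to pool backbone feature maps from several scales, project them to a common width, and concatenate them before a classification head. The sketch below is a hypothetical illustration under those assumptions (timm backbone, illustrative layer sizes), not the network described in the abstract.

import torch
import torch.nn as nn
import timm


class MultiscaleFusionClassifier(nn.Module):
    """Pool backbone features at multiple scales and fuse them for TZ type 1/2/3."""
    def __init__(self, num_classes=3, backbone="efficientnet_b3"):
        super().__init__()
        self.encoder = timm.create_model(backbone, pretrained=True, features_only=True)
        chs = self.encoder.feature_info.channels()
        # Global-average-pool each scale, project to a common width, then fuse
        self.proj = nn.ModuleList([nn.Linear(c, 128) for c in chs])
        self.head = nn.Sequential(
            nn.Linear(128 * len(chs), 256), nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes))

    def forward(self, x):
        feats = self.encoder(x)                            # one map per scale
        pooled = [p(f.mean(dim=(2, 3))) for p, f in zip(self.proj, feats)]
        return self.head(torch.cat(pooled, dim=1))


if __name__ == "__main__":
    model = MultiscaleFusionClassifier()
    logits = model(torch.randn(1, 3, 300, 300))  # detected cervical region crop
    print(logits.shape)  # torch.Size([1, 3]) -> scores for TZ types 1, 2, 3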