Background and Objectives: Device-assisted enteroscopy (DAE) plays a significant role in the evaluation of enteric lesions. Ulcers and erosions are frequent endoscopic findings and can be associated with many nosological entities, namely Crohn’s disease. Although the application of artificial intelligence (AI) is growing exponentially across image-based gastroenterology procedures, evidence on the technical feasibility and clinical applicability of AI in DAE is still lacking. This study aimed to develop and test a multi-brand convolutional neural network (CNN)-based algorithm for automatically detecting ulcers and erosions in DAE. Materials and Methods: A unicentric retrospective study was conducted for the development of a CNN based on a total of 250 DAE exams. A total of 6772 images were used, of which 678 were considered ulcers or erosions after double validation. Data were divided into a training set and a validation set, the latter being used for the performance assessment of the model. Our primary outcome measures were sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and the area under the precision–recall curve (AUC-PR). Results: Sensitivity, specificity, PPV, and NPV were 88.5%, 99.7%, 96.4%, and 98.9%, respectively. The algorithm’s accuracy was 98.7%, and the AUC-PR was 1.00. The CNN processed 293.6 frames per second, enabling live AI application in a real-life clinical DAE setting. Conclusion: To the best of our knowledge, this is the first study on the automatic multi-brand panendoscopic detection of ulcers and erosions throughout the digestive tract during DAE, overcoming a relevant interoperability challenge. Our results highlight that using a CNN to detect this type of lesion is associated with high overall accuracy.
The development of binary CNNs for automatically detecting clinically relevant endoscopic findings and for assessing endoscopic inflammatory activity is a relevant step toward AI application in digestive endoscopy, particularly for panendoscopic evaluation.
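The diagnostic metrics reported above (sensitivity, specificity, PPV, NPV, accuracy) all derive from the four confusion-matrix counts of a binary classifier. As a minimal sketch of how these are computed, assuming hypothetical counts that are not the study's actual data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate (recall)
    specificity = tn / (tn + fp)               # true-negative rate
    ppv = tp / (tp + fp)                       # positive predictive value (precision)
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical counts for illustration only (not taken from the study):
sens, spec, ppv, npv, acc = binary_metrics(tp=85, fp=3, tn=900, fn=12)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of positive cases in the validation set, which is why all four are usually reported together.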
Background and objectives: Capsule endoscopy (CE) is a non-invasive method to inspect the small bowel that, like other enteroscopy methods, requires adequate small-bowel cleansing to obtain conclusive results. Artificial intelligence (AI) algorithms have offered important benefits in the field of medical imaging over recent years, particularly through the adaptation of convolutional neural networks (CNNs) for more efficient image analysis. Here, we aimed to develop a deep learning model that uses a CNN to automatically classify the quality of intestinal preparation in CE. Methods: A CNN was designed based on 12,950 CE images obtained at two clinical centers in Porto (Portugal). The quality of the intestinal preparation was classified for each image as: excellent, ≥90% of the image surface with visible mucosa; satisfactory, 50–90% of the mucosa visible; and unsatisfactory, <50% of the mucosa visible. The total set of images was divided in an 80:20 ratio to establish training and validation datasets, respectively. The CNN prediction was compared with the classification established by consensus of a group of three experts in CE, currently considered the gold standard for evaluating cleanliness. Subsequently, the diagnostic performance of the CNN was evaluated using an independent validation dataset. Results: Among the images obtained, 3633 were designated as unsatisfactory preparation, 6005 as satisfactory preparation, and 3312 as excellent preparation. When differentiating the classes of small-bowel preparation, the algorithm achieved an overall accuracy of 92.1%, with a sensitivity of 88.4%, a specificity of 93.6%, a positive predictive value of 88.5%, and a negative predictive value of 93.4%. The area under the curve for detecting the excellent, satisfactory, and unsatisfactory classes was 0.98, 0.95, and 0.99, respectively.
Conclusions: A CNN-based tool was developed to automatically classify small-bowel preparation for CE and was shown to do so accurately. Such a system could enhance the reproducibility of the scales used for this purpose.
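The 80:20 training/validation division described above is a standard random holdout split. A minimal sketch, assuming a simple image-level shuffle with a fixed seed (the study does not specify its splitting procedure):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle items reproducibly and split into (train, validation) sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

# Example with 12,950 image identifiers, matching the dataset size above:
train, val = train_val_split(range(12950))
```

In practice, splitting at the patient or exam level (rather than the image level, as sketched here) is preferable, since frames from the same exam are highly correlated and image-level splits can inflate validation performance.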
Introduction – Capsule endoscopy (CE) is a minimally invasive exam for evaluating the gastrointestinal tract. However, its diagnostic yield for detecting gastric lesions is suboptimal. Convolutional neural networks (CNNs) are artificial intelligence models with great performance in image analysis. Nonetheless, their role in gastric evaluation by wireless CE (WCE) has not been explored. Methods – Our group developed a CNN-based algorithm for the automatic classification of pleomorphic gastric lesions, including vascular lesions (angiectasia, varices, and red spots), protruding lesions, and ulcers and erosions. A total of 12,918 gastric images from three different CE devices (PillCam Crohn’s; PillCam SB3; OMOM HD capsule endoscopy system) were used for the construction of the CNN: 1407 of protruding lesions; 994 of ulcers and erosions; 822 of vascular lesions; and 2851 of hematic residues, with the remaining images showing normal mucosa. The images were divided into a training dataset (split for 3-fold cross-validation) and a validation dataset. The model’s output was compared with a consensus classification by two gastroenterologists experienced in WCE. The network’s performance was evaluated by its sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and area under the precision–recall curve (AUPRC). Results – The trained CNN had 97.4% sensitivity and 95.9% specificity, with a PPV of 95.0% and an NPV of 97.8% for gastric lesions, and 96.6% overall accuracy. The CNN had an image processing speed of 115 images per second. Conclusions – Our group developed, for the first time, a CNN capable of automatically detecting pleomorphic gastric lesions in both small bowel and colon capsule endoscopy devices.
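The 3-fold cross-validation mentioned in the methods partitions the training data into three folds, each serving once as the held-out fold while the other two are used for training. A minimal index-generation sketch of that scheme (illustrative only; the study's actual fold assignment is not described):

```python
def kfold_indices(n_items, k=3):
    """Yield (train_idx, val_idx) index lists for k-fold cross-validation."""
    indices = list(range(n_items))
    fold_size = n_items // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs any remainder so every item is used exactly once.
        end = start + fold_size if fold < k - 1 else n_items
        val_idx = indices[start:end]
        train_idx = indices[:start] + indices[end:]
        yield train_idx, val_idx

folds = list(kfold_indices(n_items=12918, k=3))
```

Each item appears in exactly one held-out fold, so performance averaged over the k folds uses every training image for validation exactly once.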