Background and study aims: Multiple computer-aided polyp detection (CADe) systems are currently being introduced into clinical practice, with an unclear effect on examiner behavior. In this work, we aimed to measure the influence of a CADe system on reaction time, misinterpretations of normal mucosa, and changes in visual gaze patterns. Patients and methods: Participants with varying levels of experience in colonoscopy examined video sequences while their eye movements were tracked. Using a crossover design, videos were presented in two assessments, with and without CADe (GI Genius, Medtronic) support. Reaction time for polyp detection and eye-tracking metrics were evaluated. Results: 21 participants performed 1218 experiments. With a median of 1.16 sec, CADe detected polyps significantly faster than the users, whose median was 2.97 sec (99% CI, 0.40–3.43 sec and 2.53–3.77 sec, respectively). However, the reaction time of users with CADe support (median 2.9 sec; 99% CI, 2.55–3.38 sec) was similar to that without it. CADe increased misinterpretations of normal mucosa and reduced eye travel distance. Conclusions: This work confirms that CADe systems detect polyps faster than humans. However, they also led to more misinterpretations of normal mucosa and a decreased eye travel distance. Possible consequences of these findings include prolonged examination time and deskilling.
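The two outcome measures above, reaction time and eye travel distance, can be sketched in code. This is a minimal illustration, not the study's actual analysis pipeline; the function names and sample values are hypothetical:

```python
import statistics

def reaction_time(polyp_onset: float, detection_time: float) -> float:
    """Seconds between the polyp appearing on screen and the examiner flagging it."""
    return detection_time - polyp_onset

def eye_travel_distance(gaze_points: list[tuple[float, float]]) -> float:
    """Total Euclidean path length of the gaze trace, in screen units."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(gaze_points, gaze_points[1:])
    )

# Hypothetical trials: (polyp onset, detection) timestamps in seconds
trials = [(0.0, 2.9), (0.0, 3.1), (0.0, 2.6)]
rts = [reaction_time(onset, det) for onset, det in trials]
print(statistics.median(rts))  # 2.9

# Hypothetical gaze trace: a shorter path means less visual scanning
gaze = [(0.0, 0.0), (3.0, 4.0), (3.0, 0.0)]
print(eye_travel_distance(gaze))  # 5.0 + 4.0 = 9.0
```

A reduced eye travel distance under CADe support, as reported above, would show up as a smaller path length for the same video duration.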
Background Artificial intelligence (AI) using deep learning methods for polyp detection (CADe) and characterization (CADx) is on the verge of clinical application. CADe has already demonstrated its potential in randomized controlled trials. Further efforts are needed to take CADx to the next level of development. Aim This work aims to give an overview of the current status of AI in colonoscopy, without going into too much technical detail. Methods A literature search was performed to identify important studies exploring the use of AI in colonoscopy. Results This review focuses on AI performance in screening colonoscopy, summarizing the first prospective trials for CADe, the state of research in CADx, as well as current limitations of these systems and legal issues.
Background Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often consist of endoscopic videos, are cumbersome to annotate. Domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework. With this framework, instead of annotating every frame in the video sequence, experts perform only key annotations at the beginning and the end of sequences with pathologies, e.g., visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. Methods In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, the non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames are selected and passed on to an AI model, which detects and marks the desired object on all following and preceding frames. The non-expert can then adjust and modify the AI predictions and export the results, which can in turn be used to train the AI model. Results Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20 on our data. This is primarily due to the structure of the framework, which is designed to minimize the workload of the domain expert. Pairing this framework with a state-of-the-art semi-automated AI model enhances the annotation speed further.
Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining a very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source.
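The keyframe-based workflow described above can be sketched as follows. In the real framework a trained object-detection model proposes annotations for the in-between frames; here, plain linear interpolation between two expert keyframe boxes stands in as a minimal placeholder, and all names and values are hypothetical:

```python
def interpolate_boxes(start_frame: int, end_frame: int,
                      start_box: tuple, end_box: tuple) -> dict:
    """Propose (x, y, w, h) boxes for every frame between two expert keyframes
    by linear interpolation. A non-expert would then review and correct these."""
    n = end_frame - start_frame
    proposals = {}
    for f in range(start_frame, end_frame + 1):
        t = (f - start_frame) / n  # 0.0 at the first keyframe, 1.0 at the last
        proposals[f] = tuple(a + t * (b - a) for a, b in zip(start_box, end_box))
    return proposals

# Expert annotates only frames 100 and 104; frames 101-103 are filled in.
proposals = interpolate_boxes(100, 104, (10, 10, 50, 50), (30, 20, 50, 50))
print(proposals[102])  # midpoint box: (20.0, 15.0, 50.0, 50.0)
```

The factor-20 workload reduction reported above comes from this division of labor: the expert touches only the sequence boundaries, while machine proposals plus non-expert review cover everything in between.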
Purpose Computer-aided polyp detection (CADe) systems for colonoscopy have already been shown to increase adenoma detection rate (ADR) in randomized clinical trials. These commercially available closed systems often do not allow data collection or algorithm optimization, for example with regard to the use of different endoscopy processors. Here, we present the first clinical experience with a CADe system that is publicly available for research purposes. Methods We developed an end-to-end data acquisition and polyp detection system named EndoMind. Examiners at four centers utilizing four different endoscopy processors used EndoMind during their clinical routine. Detected polyps, ADR, time to first detection of a polyp (TFD), and system usability were evaluated (NCT05006092). Results During 41 colonoscopies, EndoMind detected all 29 adenomas among all 66 polyps, resulting in an ADR of 41.5%. Median TFD was 130 ms (95% CI, 80–200 ms) while maintaining a median false positive rate of 2.2% (95% CI, 1.7–2.8%). The four participating centers rated the system on the System Usability Scale with a median of 96.3 (95% CI, 70–100). Conclusion EndoMind's ability to acquire data, its real-time polyp detection, and its high usability score indicate substantial practical value for research and clinical practice. Still, the clinical benefit, measured by ADR, has to be determined in a prospective randomized controlled trial.
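The two per-video metrics reported above, TFD and false positive rate, could be computed from per-frame predictions roughly as follows. This is a hedged sketch with hypothetical data, not EndoMind's actual evaluation code:

```python
def time_to_first_detection(frames: list, frame_ms: float):
    """frames: list of (predicted, polyp_visible) booleans per frame, starting
    at the frame where the polyp first appears. Returns milliseconds until the
    first true-positive frame, or None if the polyp is never detected."""
    for i, (predicted, visible) in enumerate(frames):
        if predicted and visible:
            return i * frame_ms
    return None

def false_positive_rate(frames: list) -> float:
    """Fraction of polyp-free frames that the system nevertheless flagged."""
    flags_on_negatives = [predicted for predicted, visible in frames if not visible]
    return sum(flags_on_negatives) / len(flags_on_negatives) if flags_on_negatives else 0.0

# Hypothetical sequence at 40 ms per frame (25 fps):
frames = [(False, True), (False, True), (True, True), (True, False), (False, False)]
print(time_to_first_detection(frames, frame_ms=40))  # 2 frames * 40 ms = 80
print(false_positive_rate(frames))  # 1 flagged of 2 polyp-free frames = 0.5
```

A short TFD with a low false positive rate, as reported for EndoMind, means the system reacts within a few frames of a polyp appearing without frequently flagging normal mucosa.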