Background
In general, academic but not community endoscopists have demonstrated adequate endoscopic differentiation accuracy to make the 'resect and discard' paradigm for diminutive colorectal polyps workable. Computer analysis of video could potentially eliminate the obstacle of interobserver variability in endoscopic polyp interpretation and enable widespread acceptance of 'resect and discard'.

Study design and methods
We developed an artificial intelligence (AI) model for real-time assessment of endoscopic video images of colorectal polyps. A deep convolutional neural network model was used. Only narrow band imaging video frames were used, split equally between the relevant classes. Unaltered videos from routine exams, not specifically designed or adapted for AI classification, were used to train and validate the model. The model was tested on a separate series of 125 videos of consecutively encountered diminutive polyps that were proven to be adenomas or hyperplastic polyps.

Results
The AI model works with a confidence mechanism and did not generate sufficient confidence to predict the histology of 19 polyps in the test set, representing 15% of the polyps. For the remaining 106 diminutive polyps, the accuracy of the model was 94% (95% CI 86% to 97%), the sensitivity for identification of adenomas was 98% (95% CI 92% to 100%), specificity was 83% (95% CI 67% to 93%), negative predictive value was 97% and positive predictive value was 90%.

Conclusions
An AI model trained on endoscopic video can differentiate diminutive adenomas from hyperplastic polyps with high accuracy. Additional study of this programme in a live patient clinical trial setting to address resect and discard is planned.

Introduction
Endoscopists combine their knowledge of the spectrum of endoscopic appearances of precancerous lesions with meticulous mechanical exploration and cleaning of mucosal surfaces to maximise lesion detection during colonoscopy.
An extension of detection is endoscopic prediction of lesion histology, including differentiation of precancerous lesions from non-neoplastic lesions, and prediction of deep submucosal invasion of cancer.1 2 Image analysis can guide whether lesion removal is necessary and direct an endoscopist to the best resection method.1-3 Image analysis during colonoscopy has achieved increasing acceptance as a means to accurately predict the histology of diminutive lesions,4 5 which have minimal risk of cancer,6 so that these diminutive lesions could be resected and discarded without pathological assessment, or left in place without resection in the case of diminutive distal colon hyperplastic polyps.3 Discarding most diminutive lesions without pathological assessment has the potential for large cost saving with minimal risk.
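The confidence mechanism and the diagnostic statistics reported above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the 0.7 confidence cutoff, the function names, and the two-class softmax layout are all assumptions made for the example.

```python
import numpy as np

def predict_with_confidence(probs, threshold=0.7):
    """Return a class prediction only where the top softmax probability
    clears the confidence threshold; otherwise return -1 ("insufficient
    confidence"), mirroring the abstention described in the abstract.
    The threshold value is illustrative, not taken from the paper."""
    probs = np.asarray(probs, dtype=float)
    preds = probs.argmax(axis=1)              # 0 = hyperplastic, 1 = adenoma
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, preds, -1)

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV for the adenoma (positive)
    class, computed over confident predictions only."""
    y_pred = np.asarray(y_pred)
    keep = y_pred != -1                        # drop abstained cases
    t, p = np.asarray(y_true)[keep], y_pred[keep]
    tp = np.sum((t == 1) & (p == 1))
    fn = np.sum((t == 1) & (p == 0))
    tn = np.sum((t == 0) & (p == 0))
    fp = np.sum((t == 0) & (p == 1))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}
```

Under this scheme, the 19 abstained polyps simply never enter the confusion matrix, which is why the paper's accuracy and predictive values are reported over the remaining 106 polyps.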
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state-of-the-art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist solution to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality, showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target, and should generalize to other medical imaging problems where class imbalance is a persistent problem.
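The curriculum schedule described above (start with patches that contrast nodules against their immediate surroundings, then anneal toward uniform sampling from the full data distribution) can be sketched as follows. The function and variable names are illustrative and the linear mixing schedule is an assumption; the paper's actual sampler is more involved.

```python
import random

def cased_batch(near_nodule_patches, all_patches, progress, batch_size=8):
    """Draw a training batch following the CASED curriculum idea.

    progress: training progress in [0, 1]. Early on (progress ~ 0) the
    batch is dominated by easy, nodule-adjacent patches; as progress
    approaches 1 the sampler converges to uniform draws from the whole
    (heavily imbalanced) empirical distribution.
    """
    batch = []
    for _ in range(batch_size):
        if random.random() < progress:
            # Hard global context: uniform sample from all patches.
            batch.append(random.choice(all_patches))
        else:
            # Easy case: distinguish nodules from immediate surroundings.
            batch.append(random.choice(near_nodule_patches))
    return batch
```

The key property is the limiting behaviour: at progress 0 every sample is nodule-adjacent, and at progress 1 the sampler matches the empirical data distribution, so the model's training objective gradually shifts from an easy balanced task to the true imbalanced one.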
Background & Aims Lack of clinical validation and inter-observer variability are two limitations of endoscopic assessment and scoring of disease severity in patients with ulcerative colitis (UC). We developed a deep learning (DL) model to improve, accelerate and automate UC detection, and to predict the Mayo Endoscopic Subscore (MES) and the Ulcerative Colitis Endoscopic Index of Severity (UCEIS). Methods A total of 134 prospective videos (1,550,030 frames) were collected, and those with poor quality were excluded. The frames were labeled by experts based on MES and UCEIS scores. The scored frames were used to create a preprocessing pipeline and train multiple convolutional neural networks (CNNs) with proprietary algorithms to filter, detect and assess all frames. These frames served as the input for the DL model, whose output was continuous scores for MES and UCEIS (and its components). A graphical user interface was developed to support both labeling video sections and displaying the AI's predicted disease severity assessment from endoscopic recordings. Results Mean absolute error (MAE) and mean bias, used to evaluate the distance of the model's continuous predictions from ground truth and any tendency to over- or under-predict, were excellent for MES and UCEIS. The quadratic weighted kappa used to compare inter-rater agreement between the experts' labels and the model's predictions showed strong agreement (0.87 and 0.88 at frame level, 0.88 and 0.90 at section level, and 0.90 and 0.78 at video level, for MES and UCEIS, respectively). Conclusions We present the first fully automated tool that improves the accuracy of the MES and UCEIS, reduces the time between video collection and review, and improves subsequent quality assurance and scoring.
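The quadratic weighted kappa used above to measure expert-model agreement on ordinal scores has a standard definition, sketched below for integer ratings such as MES (0-3). This is a generic textbook implementation, not the authors' code; rounding of the model's continuous outputs to integer classes is assumed.

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic weighted Cohen's kappa between two integer rating vectors,
    e.g. expert MES labels (0-3) vs. rounded model predictions.
    Returns 1 for perfect agreement, 0 for chance-level agreement."""
    a, b = np.asarray(a), np.asarray(b)
    # Observed agreement matrix.
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        O[i, j] += 1
    # Expected matrix under independence (outer product of marginals).
    E = np.outer(np.bincount(a, minlength=n_classes),
                 np.bincount(b, minlength=n_classes)) / len(a)
    # Quadratic disagreement weights: (i - j)^2, normalised.
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1 - (W * O).sum() / (W * E).sum()
```

Because the weights grow quadratically with the distance between ratings, a prediction of MES 3 against a label of MES 0 is penalised far more heavily than an off-by-one disagreement, which suits ordinal severity scales.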
Background Computer vision and deep learning (DL) to assess and help with tissue characterization of disease activity in ulcerative colitis (UC) through the Mayo Endoscopic Subscore (MES) have shown good results in central reading for clinical trials. The UCEIS (Ulcerative Colitis Endoscopic Index of Severity), being a more granular index, may be more reflective of disease activity and better primed for artificial intelligence (AI). We set out to create UC detection and scoring in a single tool and graphical user interface (GUI), improving the accuracy and precision of MES and UCEIS scores and reducing the time elapsed between video collection, quality assurance and final scoring. We apply DL models to detect and filter scorable frames, assess the quality of endoscopic recordings, and predict MES and UCEIS scores in videos of patients with UC. Methods We leveraged more than 375,000 frames from endoscopy cases using Olympus scopes (190 and 180 Series). Experienced endoscopists and 9 labellers tagged approximately 22,000 (6%) images showing normal tissue, disease states (MES or UCEIS subscores) and non-scorable frames. We separated the total frames into 3 categories: training (60%), testing (20%) and validation (20%). Detection used a convolutional neural network (CNN), Inception V3, including a biopsy and post-biopsy detector, an out-of-the-body framework and a blue-light algorithm. A similar architecture was used for scoring, with multiple separate units and corresponding dense layers on top of the CNN providing continuous scores for 5 separate outputs: MES, aggregate UCEIS and its individual components, vascular pattern, bleeding and ulcers. Results Multiple metrics evaluate the detection models. Overall performance has an accuracy of approximately 88%, with similar precision and recall for all classes. MAE (distance from ground truth) and mean bias (over/under-prediction tendency) are used to assess the performance of the scoring model. Our model performs well: predicted distributions are relatively close to the labelled ground truth data, and MAE and bias for all frames are relatively low considering the magnitude of the scoring scale.
To leverage all our models, we developed a practical tool intended to improve the efficiency and accuracy of the reading and scoring process for UC at different stages of the clinical journey. Conclusion We propose a DL approach based on labelled images to automate a workflow for improving and accelerating UC disease detection and scoring using MES and UCEIS scores. Our deep learning model shows relevant feature identification for scoring disease activity in UC patients, aligns well with both the scoring guidelines and the performance of experts, and demonstrates strong promise for generalization. Going forward, we aim to continue developing our detection and scoring tool. With our detailed workflow supported by deep learning models, we have a driving function to create a precise and potentially superhuman-level AI to score disease activity.
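The two scoring metrics used by both UC abstracts, MAE (distance from ground truth) and mean bias (over- or under-prediction tendency), reduce to a short computation on the continuous score outputs. A minimal numpy sketch, with illustrative function and variable names:

```python
import numpy as np

def mae_and_bias(pred, truth):
    """Scoring-model metrics for continuous severity scores (e.g. MES, UCEIS).

    MAE  = mean |prediction - truth|, the average distance from ground truth.
    Bias = mean (prediction - truth); positive values indicate a tendency
           to over-predict severity, negative values to under-predict.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    err = pred - truth
    return np.abs(err).mean(), err.mean()
```

Reporting both matters: a model can have a low bias (over- and under-predictions cancelling out) while still having a large MAE, so neither metric alone characterises the scoring error.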