Capsule endoscopy (CE) is a widely used, minimally invasive alternative to traditional endoscopy that allows visualisation of the entire small intestine. Patient preparation can help to obtain a cleaner intestine and thus better visibility in the resulting videos. However, studies on the most effective preparation method are conflicting due to the absence of objective, automatic cleanliness evaluation methods. In this work, we aim to provide such a method capable of presenting results on an intuitive scale, with a novel, relatively lightweight convolutional neural network architecture at its core. We trained our model using 5-fold cross-validation on an extensive data set of over 50,000 image patches, collected from 35 different CE procedures, and compared it with state-of-the-art classification methods. From the patch classification results, we developed a method to automatically estimate pixel-level probabilities and deduce cleanliness evaluation scores through automatically learnt thresholds. We then validated our method in a clinical setting on 30 newly collected CE videos, comparing the resulting scores to those independently assigned by human specialists. We obtained the highest classification accuracy for the proposed method (95.23%), with significantly lower average prediction times than for the second-best method. In the validation of our method, we found acceptable agreement with two human specialists, comparable to inter-human agreement, showing its validity as an objective evaluation method.
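The step from patch classifications to pixel-level probabilities and a final score can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names, the uniform averaging of overlapping patches, and the fixed 0.5 threshold (which in the actual method is learnt automatically) are all assumptions.

```python
import numpy as np

def pixel_probability_map(patch_probs, patch_size, stride, frame_shape):
    """Spread patch-level 'clean' probabilities onto a pixel grid by
    averaging the predictions of all sliding-window patches that cover
    each pixel. `patch_probs` lists one probability per patch, in
    row-major sliding-window order."""
    h, w = frame_shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    i = 0
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            acc[y:y + patch_size, x:x + patch_size] += patch_probs[i]
            cnt[y:y + patch_size, x:x + patch_size] += 1
            i += 1
    return acc / np.maximum(cnt, 1)

def cleanliness_score(prob_map, threshold=0.5):
    """Fraction of pixels whose 'clean' probability exceeds the
    threshold; here fixed at 0.5 for illustration, whereas the
    described method learns the thresholds from annotated data."""
    return float((prob_map >= threshold).mean())
```

With non-overlapping 2x2 patches on a 4x4 frame, `pixel_probability_map(np.array([1.0, 1.0, 0.0, 0.0]), 2, 2, (4, 4))` marks the top half clean and the bottom half dirty, giving a cleanliness score of 0.5.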
Wireless Capsule Endoscopy is a technique that allows for observation of the entire gastrointestinal tract in an easy and non-invasive way. However, its greatest limitation lies in the time required to analyse the large number of images generated in each examination for diagnosis, which is about 2 hours. This not only incurs a high cost but also increases the probability of misdiagnosis due to physician fatigue, since the variable appearance of abnormalities demands continuous concentration. In this work, we designed and developed a system capable of automatically detecting blood based on classification of extracted regions, following two different classification approaches. The first method consisted of extracting hand-crafted features used to train machine learning algorithms, specifically Support Vector Machines and Random Forest, to create models for classifying images as healthy tissue or blood. The second method consisted of applying deep learning techniques, namely convolutional neural networks, which extract the relevant image features on their own. The best results (95.7% sensitivity and 92.3% specificity) were obtained for a Random Forest model trained with features extracted from the histograms of the three HSV colour space channels. For both methods we extracted square patches of several sizes using a sliding window, while for the first approach we also implemented the waterpixels technique to improve the classification results.
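The hand-crafted features behind the best-performing model can be sketched as below: a per-channel histogram of an HSV patch, concatenated into one feature vector. This is an illustrative sketch, not the thesis code; the bin count, value range, and normalisation are assumptions, and the resulting vectors would feed a classifier such as scikit-learn's RandomForestClassifier.

```python
import numpy as np

def hsv_histogram_features(patch_hsv, bins=16):
    """Build a feature vector from an HSV image patch by concatenating
    the histograms of the three channels (H, S, V), each normalised to
    sum to 1 so that patch size does not affect the feature scale.
    `patch_hsv` is an (height, width, 3) array with values in [0, 256)."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(patch_hsv[..., channel],
                               bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    # The concatenated vector (3 * bins values) would be one training
    # sample for a Random Forest or SVM classifier.
    return np.concatenate(feats)
```

For an all-zero 8x8 patch and `bins=4`, the function returns a length-12 vector where each channel's first bin is 1.0 and all other bins are 0.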
[…] In this way, we achieve a precision of 86.55% and a recall of 88.79% on our test data set. Extending this objective, we also aim to visualise intestinal motility in a manner analogous to a traditional intestinal manometry, based solely on the minimally invasive technique of CE. To this end, we align frames with similar orientation and derive suitable parameters for our segmentation method from the properties of the tunnel's bounding rectangle. Finally, we compute the relative size of the tunnel to construct an equivalent of an intestinal manometry from visual information. Since concluding our work, our method for automatic cleanliness evaluation has been used in a large-scale study that is still ongoing, in which we actively participate. While much research focuses on the automatic detection of pathologies such as tumours, polyps, and haemorrhages, we hope that our work can make a significant contribution to extracting more information from CE in other, often underestimated areas as well.
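The final step, turning per-frame tunnel segmentations into a manometry-like signal, can be sketched as follows. This is a hypothetical illustration under stated assumptions: the function names are invented, and the tunnel is approximated here by its bounding rectangle, whereas the actual method derives the relative size from the segmentation itself.

```python
import numpy as np

def tunnel_relative_size(bbox, frame_area):
    """Relative area of the tunnel's bounding rectangle within the
    frame; `bbox` is (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = bbox
    return ((x1 - x0) * (y1 - y0)) / frame_area

def manometry_signal(bboxes, frame_shape):
    """One sample per (orientation-aligned) frame: the relative tunnel
    size over time, plotted as a manometry-like pressure analogue."""
    h, w = frame_shape
    return np.array([tunnel_relative_size(b, h * w) for b in bboxes])
```

For example, a 5x4 bounding box in a 10x10 frame yields a relative size of 0.2; stacking such values over consecutive frames gives the time series that stands in for a manometry trace.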