2021
DOI: 10.1055/a-1372-2789

Expert-level classification of gastritis by endoscopy using deep learning: a multicenter diagnostic trial

Abstract: Background and study aims Endoscopy plays a crucial role in diagnosis of gastritis. Endoscopists have low accuracy in diagnosing atrophic gastritis with white-light endoscopy (WLE). High-risk factors (such as atrophic gastritis [AG]) for carcinogenesis demand early detection. Deep learning (DL)-based gastritis classification with WLE rarely has been reported. We built a system for improving the accuracy of diagnosis of AG with WLE to assist with this common gastritis diagnosis and help lessen endoscopist fatig…

Cited by 8 publications (16 citation statements) | References 34 publications
“…Five of the eight studies were retrospective (38, 40–42, 45), and three were prospective (39, 43, 44). All eight studies used deep-learning techniques: five used image-classification algorithms (38, 41, 42, 44, 45), one used an object-detection algorithm (39), one used a semantic-segmentation algorithm (43), and one used a combination of image classification and semantic segmentation (40). All studies were tested using static-image models, and four studies used prospective videos to validate the models further (39, 40, 43, 44).…”
Section: Results
confidence: 99%
“…All eight studies used deep-learning techniques: five used image-classification algorithms (38, 41, 42, 44, 45), one used an object-detection algorithm (39), one used a semantic-segmentation algorithm (43), and one used a combination of image classification and semantic segmentation (40). All studies were tested using static-image models, and four studies used prospective videos to validate the models further (39, 40, 43, 44). Regarding the type of endoscopy, four studies included only normal white-light endoscopy (38, 40–42), three used enhanced endoscopy (39, 44, 45), and one did not specify the type of endoscopy (43).…”
Section: Results
confidence: 99%
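To make the video-validation step mentioned above concrete, here is a minimal, hypothetical sketch of how a trained static-image classifier could be applied frame by frame to a prospective endoscopy video. The classify_video_frames helper, the preprocessing pipeline, and the 224x224 input size are illustrative assumptions, not the pipeline used by any of the cited studies.

```python
# Illustrative sketch only: running a static-image classifier frame by frame
# over an endoscopy video, as described for prospective video validation.
# `model` and `preprocess` are placeholders, not any cited study's pipeline.
import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def classify_video_frames(model, video_path, device="cpu"):
    """Return one predicted class index per frame of the video at `video_path`."""
    model.eval().to(device)
    cap = cv2.VideoCapture(video_path)
    predictions = []
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; convert to RGB before preprocessing.
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        x = preprocess(frame_rgb).unsqueeze(0).to(device)
        predictions.append(model(x).argmax(dim=1).item())
    cap.release()
    return predictions
```

In practice, per-frame predictions are usually aggregated (for example by majority vote over a window) before comparison with video-level ground truth; the cited studies do not necessarily share one aggregation scheme.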
“…DenseNet121 was trained with 5470 images 3 to characterise AG, resulting in an accuracy of 94.2% on the test dataset. Similarly, UNet++ with ResNet50 was used to classify AG, IM and haemorrhage 41 . For this, 8141 images (4587 patients) were used for training, while 258 external patients and 80 videos were used for the testing stage.…”
Section: Methods
confidence: 99%
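As a point of reference for the DenseNet121-based AG characterisation described above, the following is a minimal sketch, assuming PyTorch/torchvision (which the cited study may or may not have used), of fine-tuning an ImageNet-pretrained DenseNet121 for a binary AG vs. non-AG task. The head size, optimiser, learning rate, and dummy batch are illustrative assumptions, not the study's reported configuration.

```python
# Minimal sketch (not the cited study's code): adapting an ImageNet-pretrained
# DenseNet121 to a binary atrophic-gastritis (AG) vs. non-AG classifier.
import torch
import torch.nn as nn
from torchvision import models

def build_ag_classifier(num_classes: int = 2) -> nn.Module:
    # torchvision >= 0.13 weights API; older versions use pretrained=True.
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    # Replace the 1000-class ImageNet head with a task-specific linear head.
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_ag_classifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB endoscopy images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```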
“… Target disease | Dataset | Method | Outcome | Similar studies
OGD O WL, BE | Train: 494,364; Test: 1704 (669 patients) | Classification a 1 — Neoplasia vs NDBE (hybrid ResNet-UNet) | (DS 4) sensitivity: 90%, specificity: 88%, accuracy: 89%; (DS 5) sensitivity: 93%, specificity: 83%, accuracy: 88% | Ebigbo et al 2 (ResNet100)
OGD O NBI, SCC | Train: 6473 images; Test: 6671 images and 80 videos | Segmentation 39 (SegNet) | (Per-image) sensitivity: 98.04%, specificity: 95.03%; (per-frame) sensitivity: 91.5%, specificity: 99.9% | Nakagawa et al 116 , Sho et al 117 (SSD), Everson et al 5 (Deep supervision)
OGD S WLI, AG | 5470 images (Train: 70%; Test: 30%) | Classification 3 (DenseNet121) | Sensitivity: 94.5%, specificity: 94%, accuracy: 94.2% | Guimarães et al 4 (VGG16)
OGD S WLI, AG, IM, erosion and hem. | Train: 7326 images; Val: 815 images; Test: 570 images, 258 external test and 80 videos | Classification a 41 (UNet++, ResNet50) | Accuracy (non-AG/AG, atrophy/IM, and erosion/haemorrhage): 88.78%, 87.40% and 93.67% (int. test), 91.23%, 85.81% and 92.70% (ext.…”
Section: Introduction
confidence: 99%
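For the UNet++/ResNet50 entry in the table above, a minimal sketch follows, assuming the third-party segmentation_models_pytorch package as one possible implementation. The three output classes mirror the three reported lesion groupings but are an assumption, and the cited work's actual architecture and label scheme may differ.

```python
# Minimal sketch, assuming segmentation_models_pytorch; the cited study's own
# UNet++/ResNet50 implementation and label scheme may differ.
import torch
import segmentation_models_pytorch as smp

# UNet++ decoder over an ImageNet-pretrained ResNet50 encoder.
model = smp.UnetPlusPlus(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,   # RGB white-light endoscopy frames
    classes=3,       # e.g. atrophy / IM / erosion-haemorrhage (assumed mapping)
)

# Forward pass on a dummy batch; the output is a per-pixel logit map per class.
# Input spatial size should be divisible by 32 for the ResNet50 encoder.
x = torch.randn(2, 3, 256, 256)
with torch.no_grad():
    masks = model(x)   # shape: (2, 3, 256, 256)
```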
“…AI also showed excellent ability in CAG diagnosis. For example, Guimarães et al (20) and Mu et al (21) automatically extracted endoscopic image features to identify CAG with DL techniques, achieving accuracies of 93% and 95%, respectively. CAG is endoscopically visible as red-and-white mucosa, predominantly white, with exposed mucosal blood vessels, and it can be accompanied by mucosal granules or nodules (22,23).…”
Section: Introduction
confidence: 99%