Background: The authors previously established deep-learning models to predict the histopathology and invasion depth of gastric lesions from endoscopic images. This study aimed to establish and validate a deep-learning-based clinical decision support system (CDSS) for the automated detection and classification (diagnosis and invasion-depth prediction) of gastric neoplasms in real-time endoscopy.
Methods: The same 5,017 endoscopic images used to establish the previous models served as the training data. The primary outcomes were (1) the lesion-detection rate for the detection model and (2) the lesion-classification accuracy for the classification model. For performance validation of the lesion-detection model, 2,524 real-time procedures were tested in a randomized pilot study. Consecutive patients were allocated to either CDSS-assisted screening endoscopy or conventional screening endoscopy, and the lesion-detection rate was compared between the groups. For performance validation of the lesion-classification model, a prospective multicenter external test was conducted using 3,976 novel images from five institutions.
Results: The lesion-detection rate was 95.6% (internal test). In the performance validation, CDSS-assisted endoscopy showed a higher lesion-detection rate than conventional screening endoscopy, although the difference was not statistically significant (2.0% vs. 1.3%, P=0.21) (randomized study). The lesion-classification accuracy was 89.7% in the four-class classification (advanced gastric cancer, early gastric cancer, dysplasia, and non-neoplasm) and 89.2% in the invasion-depth prediction (mucosa-confined vs. submucosa-invaded) (internal test). In the performance validation, the CDSS reached 81.5% accuracy in the four-class classification and 86.4% accuracy in the binary classification (prospective multicenter external test).
Conclusions: The CDSS demonstrated high performance in detecting gastric lesions and classifying the detected lesions, showing potential for real-world clinical application.
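As a minimal sketch of how the two reported classification metrics (four-class histopathology accuracy and binary invasion-depth accuracy) could be computed from a model's outputs, the following Python snippet uses scikit-learn; the class names, variable names, and toy labels are illustrative assumptions, not the study's actual code or data.

```python
# Hypothetical sketch: computing overall accuracy and a confusion matrix for
# the two classification tasks described above. All names and labels here are
# illustrative; they are not taken from the study's implementation.
from sklearn.metrics import accuracy_score, confusion_matrix

FOUR_CLASSES = ["advanced_cancer", "early_cancer", "dysplasia", "non_neoplasm"]
DEPTH_CLASSES = ["mucosa_confined", "submucosa_invaded"]

def report_classification(y_true, y_pred, class_names):
    """Print overall accuracy and the per-class confusion matrix for one task."""
    acc = accuracy_score(y_true, y_pred)
    cm = confusion_matrix(y_true, y_pred, labels=range(len(class_names)))
    print(f"accuracy = {acc:.1%}")
    for name, row in zip(class_names, cm):
        print(f"{name:>18}: {row}")

# Toy example labels (indices into the class lists above).
y_true_4 = [0, 1, 1, 2, 3, 3]
y_pred_4 = [0, 1, 2, 2, 3, 3]
report_classification(y_true_4, y_pred_4, FOUR_CLASSES)

y_true_depth = [0, 0, 1, 1]
y_pred_depth = [0, 1, 1, 1]
report_classification(y_true_depth, y_pred_depth, DEPTH_CLASSES)
```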
Auto-detection of cerebral aneurysms via convolutional neural network (CNN) is being increasingly reported. However, few studies to date have addressed prediction of rupture risk rather than detection alone. We developed a multi-view CNN for predicting the rupture risk of small unruptured intracranial aneurysms (UIAs) based on three-dimensional (3D) digital subtraction angiography (DSA). The performance of a multi-view CNN-ResNet50 in predicting the rupture risk (high vs. non-high) of anterior-circulation UIAs measuring less than 7 mm was compared with that of other CNN architectures (AlexNet and VGG16), networks of the same family with different depths (ResNet101 and ResNet152), and a single-image-based CNN (single-view ResNet50). The sensitivity, specificity, and overall accuracy of risk prediction were estimated and compared across CNN architectures. The study included 364 UIAs in the training dataset and 93 in the test dataset. The multi-view CNN-ResNet50 exhibited a sensitivity of 81.82 (66.76–91.29)%, a specificity of 81.63 (67.50–90.76)%, and an overall accuracy of 81.72 (66.98–90.92)% for risk prediction. AlexNet, VGG16, ResNet101, ResNet152, and the single-view CNN-ResNet50 showed similar specificity, but their sensitivity and overall accuracy were lower (AlexNet, 63.64% and 76.34%; VGG16, 68.18% and 74.19%; ResNet101, 68.18% and 73.12%; ResNet152, 54.55% and 72.04%; single-view CNN-ResNet50, 50.00% and 64.52%) than those of the multi-view CNN-ResNet50. The F1 score was also highest for the multi-view CNN-ResNet50 (80.90 (67.29–91.81)%). Our study suggests that a multi-view CNN-ResNet50 may be feasible for assessing rupture risk in small UIAs.
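The abstract does not describe how the multiple views are fused, so the PyTorch sketch below is only one plausible reading: a ResNet50 backbone shared across views, per-view features averaged, and a linear head for the binary high vs. non-high risk output. The number of views, image size, and fusion-by-averaging are assumptions.

```python
# Hypothetical sketch of a multi-view CNN built on ResNet50 for binary
# rupture-risk prediction (high vs. non-high). Feature averaging across views
# is an assumption; the original study's fusion strategy is not specified here.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultiViewResNet50(nn.Module):
    def __init__(self, num_views: int = 3, num_classes: int = 2):
        super().__init__()
        backbone = resnet50(weights=None)        # one backbone shared by all views
        feat_dim = backbone.fc.in_features       # 2048 for ResNet50
        backbone.fc = nn.Identity()              # use the backbone as a feature extractor
        self.backbone = backbone
        self.num_views = num_views
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, views, 3, H, W) -- one projection image per view
        b, v, c, h, w = x.shape
        feats = self.backbone(x.reshape(b * v, c, h, w))  # (b*v, feat_dim)
        feats = feats.reshape(b, v, -1).mean(dim=1)       # fuse views by averaging
        return self.classifier(feats)                     # (b, num_classes) logits

# Toy forward pass: a batch of 2 aneurysms, 3 views each, 224x224 images.
model = MultiViewResNet50(num_views=3)
logits = model(torch.randn(2, 3, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```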