Differential diagnosis of focal pancreatic masses is based on endoscopic ultrasound (EUS) guided fine needle aspiration biopsy (EUS-FNA/FNB). Several imaging techniques (i.e., gray-scale, color Doppler, contrast enhancement and elastography) are used for differential diagnosis; however, diagnosis remains highly operator dependent. To address this problem, machine learning algorithms (MLA) can generate an automatic computer-aided diagnosis (CAD) by analyzing a large number of clinical images in real time. We aimed to develop an MLA to characterize focal pancreatic masses during the EUS procedure. The study included 65 patients with focal pancreatic masses, with 20 EUS images selected from each patient (gray-scale, color Doppler, arterial and venous phase contrast enhancement, and elastography). Images were classified based on the cytopathology exam as chronic pseudotumoral pancreatitis (CPP), pancreatic neuroendocrine tumor (PNET), or ductal adenocarcinoma (PDAC). The MLA is based on a deep learning method that combines convolutional (CNN) and long short-term memory (LSTM) neural networks. A total of 2688 images were used for training and 672 images for testing the deep learning models. The CNN was developed to identify the discriminative features of the images, while the LSTM network was used to extract the dependencies between images. The model predicted the clinical diagnosis with an area under the curve (AUC) of 0.98 and an overall accuracy of 98.26%. The negative (NPV) and positive (PPV) predictive values, with the corresponding 95% confidence intervals (CI), were 96.7% [94.5, 98.9] and 98.1% [96.81, 99.4] for PDAC, 96.5% [94.1, 98.8] and 99.7% [99.3, 100] for CPP, and 98.9% [97.5, 100] and 98.3% [97.1, 99.4] for PNET. Following further validation on an independent test cohort, this method could become an efficient CAD tool for differentiating focal pancreatic masses in real time.
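The abstract describes a per-image CNN feature extractor followed by an LSTM that models dependencies across the set of EUS images acquired for each patient. The sketch below is a hypothetical Keras implementation of that general pattern; the layer counts, image size, and hyperparameters are assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical sketch: a CNN-LSTM classifier that reads the sequence of EUS
# images acquired per patient and predicts one of three classes (CPP, PNET, PDAC).
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 20, 224, 224, 3   # 20 EUS frames per patient (assumed image size)
NUM_CLASSES = 3                      # CPP, PNET, PDAC

# Per-frame CNN feature extractor, applied to every image in the sequence.
frame_encoder = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(frame_encoder, input_shape=(SEQ_LEN, H, W, C)),
    layers.LSTM(128),                                   # dependencies across frames
    layers.Dense(NUM_CLASSES, activation="softmax"),    # per-class probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```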
Aim: In this paper we propose different convolutional neural network (CNN) architectures to classify fatty liver disease in ultrasound images using only pixels and diagnosis labels as input. We trained and validated our models using a dataset of 629 images comprising two types of liver images, normal liver and liver steatosis. Material and methods: We assessed two pre-trained convolutional neural network models, Inception-v3 and VGG-16, using fine-tuning. Both models were pre-trained on the ImageNet dataset to extract features from B-mode ultrasound liver images. The results obtained with these methods were compared to select the predictive model with the best performance metrics. We trained the two models on a dataset of 262 liver steatosis images and 234 normal liver images, and assessed them on a dataset of 70 liver steatosis images and 63 normal liver images. Results: The model based on Inception-v3 obtained a test accuracy of 93.23%, with a sensitivity of 89.9%, a precision of 96.6%, and an area under the receiver operating characteristic curve (ROC AUC) of 0.93. The model based on VGG-16 obtained a test accuracy of 90.77%, with a sensitivity of 88.9%, a precision of 92.85%, and a ROC AUC of 0.91. Conclusion: The deep learning algorithms we propose to detect steatosis and classify the images into normal and fatty liver yield an excellent test performance of over 90%. However, larger future studies are required to establish how these algorithms can be implemented in a clinical setting.
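As a hedged illustration of the transfer-learning setup described above, the sketch below shows one common way to fine-tune an ImageNet-pretrained Inception-v3 for the binary normal-vs-steatosis task in Keras. The head architecture, input size, and hyperparameters are assumptions, not the authors' exact configuration.

```python
# Illustrative transfer-learning sketch: InceptionV3 pre-trained on ImageNet
# as a frozen feature extractor, with a new binary head (normal vs. steatosis).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False   # freeze ImageNet features; selected layers can be unfrozen later for fine-tuning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # probability of liver steatosis
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="roc_auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown here
```

The same head can be placed on top of `tensorflow.keras.applications.VGG16` to reproduce the second model the abstract compares against.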
Background and Objectives: Thyroid disorders currently have a high incidence in the worldwide population, so the development of alternative methods to improve the diagnostic process is necessary. Materials and Methods: For this purpose, we developed an ensemble method that fuses two deep learning models, one based on a convolutional neural network and the other on transfer learning. For the first model, called 5-CNN, we developed an efficient end-to-end trained model with five convolutional layers, while for the second model the pre-trained VGG-19 architecture was repurposed, optimized, and trained. We trained and validated our models using a dataset of ultrasound images comprising four types of thyroid images: autoimmune, nodular, micro-nodular, and normal. Results: Excellent results were obtained by the ensemble CNN-VGG method, which outperformed the 5-CNN and VGG-19 models: an overall test accuracy of 97.35%, with an overall specificity of 98.43%, a sensitivity of 95.75%, and positive and negative predictive values of 95.41% and 98.05%, respectively. The micro-averaged area under the receiver operating characteristic curve was 0.96. The results were also validated by two physicians: an endocrinologist and a pediatrician. Conclusions: We propose a new deep learning approach for classifying thyroid ultrasound images to assist physicians in the diagnostic process.
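One simple way to fuse two trained classifiers, as the ensemble CNN-VGG method above does, is soft voting over their class probabilities. The sketch below assumes both models are already trained and saved; the file names, class labels, and averaging scheme are illustrative assumptions rather than the published code.

```python
# Illustrative soft-voting ensemble of a small 5-layer CNN and a repurposed
# VGG-19, averaging their softmax outputs over the four thyroid classes.
import numpy as np
import tensorflow as tf

CLASSES = ["autoimmune", "nodular", "micro-nodular", "normal"]

cnn5 = tf.keras.models.load_model("5cnn_thyroid.h5")      # hypothetical file names
vgg19 = tf.keras.models.load_model("vgg19_thyroid.h5")

def ensemble_predict(images: np.ndarray) -> np.ndarray:
    """Average the class probabilities of both models (simple soft voting)."""
    p1 = cnn5.predict(images, verbose=0)
    p2 = vgg19.predict(images, verbose=0)
    return (p1 + p2) / 2.0

# Usage: predicted_labels = [CLASSES[i] for i in ensemble_predict(test_images).argmax(axis=1)]
```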
Background and Aims: Mucosal healing (MH), which can be assessed by confocal laser endomicroscopy (CLE), is associated with a stable course of Crohn's disease (CD). To minimize operator errors and automate the assessment of CLE images, we used a deep learning (DL) model for image analysis. We hypothesized that DL combining convolutional neural networks (CNNs) and long short-term memory (LSTM) can distinguish between normal and inflamed colonic mucosa in CLE images. Methods: The study included 54 patients: 32 with known active CD and 22 controls (18 CD patients with MH and four patients with normal mucosa and no history of inflammatory bowel disease). We designed and trained a deep convolutional neural network to detect active CD using 6,205 endomicroscopy images classified as active CD inflammation (3,672 images) or control, i.e., mucosal healing or no inflammation (2,533 images). CLE imaging was performed on four colorectal areas and the terminal ileum. The gold standard was histopathological evaluation. The dataset was randomly split into two distinct training and testing datasets: 80% of the data from each patient were used for training and the remaining 20% for testing. The training dataset consists of 2,892 images with inflammation and 2,189 control images; the testing dataset consists of 780 images with inflammation and 344 control images of the colon. We used a CNN-LSTM model with four convolutional layers and one LSTM layer for automatic detection of MH and CD diagnosis from CLE images. Results: CLE investigation reveals normal colonic mucosa with round crypts, and inflamed mucosa with irregular crypts and tortuous, dilated blood vessels. Our method obtained a 95.3% test accuracy, with a specificity of 92.78%, a sensitivity of 94.6%, and an area under the receiver operating characteristic curve of 0.98. Conclusions: Applying machine learning algorithms to CLE images can successfully differentiate between inflammation and normal ileocolonic mucosa and can serve as a computer-aided diagnosis tool for CD. Future clinical studies with a larger patient spectrum are needed to validate our results and improve the CNN-LSTM model.
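The abstract specifies four convolutional layers followed by one LSTM layer for single CLE frames. A minimal sketch of one plausible way to wire this up is shown below, assuming the LSTM reads the final CNN feature map row by row as a sequence; the authors' exact layer configuration, image size, and filter counts may differ.

```python
# Minimal CNN-LSTM sketch for binary classification of single CLE frames
# (active CD inflammation vs. control). Shapes are computed for a 256x256
# grayscale input; all sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

H, W, C = 256, 256, 1   # assumed CLE image size, grayscale

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Final feature map is 14x14x128: treat each of the 14 rows as one timestep.
    layers.Reshape((14, 14 * 128)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),   # probability of active CD inflammation
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="roc_auc")])
```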