Automatic and accurate esophageal lesion classification and segmentation are of great significance for clinically estimating the lesion status of esophageal disease and devising suitable diagnostic schemes. Due to individual variations and the visual similarity of lesions in shape, color and texture, current clinical methods remain subject to potentially high risk and time consumption. In this paper, we propose an Esophageal Lesion Network (ELNet) for automatic esophageal lesion classification and segmentation using deep convolutional neural networks (DCNNs). The method automatically integrates dual-view contextual lesion information to extract global and local features for classification of four esophageal image types (Normal, Inflammation, Barrett, and Cancer), and proposes a lesion-specific segmentation network for automatic pixel-level annotation of three esophageal lesion types. On an established clinical large-scale database of 1051 white-light endoscopic images, ten-fold cross-validation is used for method validation. Experimental results show that the proposed framework achieves classification with a sensitivity of 0.9034, specificity of 0.9718 and accuracy of 0.9628, and segmentation with a sensitivity of 0.8018, specificity of 0.9655 and accuracy of 0.9462. These results indicate that our method enables efficient, accurate and reliable esophageal lesion diagnosis in clinical practice. The main contributions of our work can be summarized as follows:
1. For the first time, the proposed ELNet enables automatic, reliable and comprehensive classification of esophageal lesions into four types (Normal, Inflammation, Barrett, and Cancer) and lesion-specific segmentation from clinical white-light esophageal images, helping clinicians devise suitable and rapid diagnostic schemes.
2. A novel Dual-Stream Network (DSN) is proposed for esophageal lesion classification. DSN automatically integrates dual-view contextual lesion information using two CNN streams that complementarily extract global features from the holistic esophageal images and local features from the lesion patches.
3. A Segmentation Network with Classification (SNC) strategy is proposed for lesion-specific annotation, automatically segmenting three lesion types (Inflammation, Barrett, Cancer) at the pixel level to reduce the intra-class differences of esophageal lesions.
4. A clinically relevant large-scale esophageal database is established for esophageal lesion classification and segmentation. This database includes 1051 white-light endoscopic images covering the four lesion types, each with a classification label and its corresponding segmentation annotation.
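As a rough illustration of the dual-stream idea described above, the following PyTorch sketch fuses global features from the whole endoscopic image with local features from a cropped lesion patch before a four-way classification. The backbone choice (ResNet-18), feature dimensions and class head are illustrative assumptions, not the authors' exact DSN configuration.

```python
# Hedged sketch of a dual-stream classifier (not the authors' exact architecture):
# one stream sees the holistic image, the other sees the lesion patch, and their
# features are concatenated for 4-class lesion classification.
import torch
import torch.nn as nn
from torchvision import models

class DualStreamClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Global stream: extracts context from the whole esophageal image.
        self.global_stream = models.resnet18(weights=None)
        self.global_stream.fc = nn.Identity()          # 512-d features
        # Local stream: extracts detail from the cropped lesion patch.
        self.local_stream = models.resnet18(weights=None)
        self.local_stream.fc = nn.Identity()           # 512-d features
        # Fuse both views and predict {Normal, Inflammation, Barrett, Cancer}.
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, whole_image: torch.Tensor, lesion_patch: torch.Tensor) -> torch.Tensor:
        global_feat = self.global_stream(whole_image)   # (B, 512)
        local_feat = self.local_stream(lesion_patch)    # (B, 512)
        return self.classifier(torch.cat([global_feat, local_feat], dim=1))

# Usage: logits over the four esophageal image types.
model = DualStreamClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```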
Diabetic retinopathy (DR) is the most common eye complication of diabetes and one of the leading causes of blindness and vision impairment. Automated and accurate DR grading is of great significance for the timely and effective treatment of fundus diseases. Current clinical methods remain subject to potential time consumption and high risk. In this paper, a hierarchical Coarse-to-Fine network (CF-DRNet) is proposed as an automatic clinical tool to classify five DR severity grades using convolutional neural networks (CNNs). The CF-DRNet conforms to the hierarchical characteristic of DR grading and effectively improves the performance of five-class DR grading. It consists of the following: (1) The Coarse Network performs two-class classification into No DR and DR, where an attention gate module highlights salient lesion features and suppresses irrelevant background information. (2) The Fine Network classifies the images graded as DR by the Coarse Network into four severity grades: mild, moderate and severe non-proliferative DR (NPDR), and proliferative DR (PDR). Experimental results show that the proposed CF-DRNet outperforms several state-of-the-art methods on the publicly available IDRiD and Kaggle fundus image datasets. These results indicate that our method enables efficient and reliable DR grading diagnosis in clinical practice.
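The coarse-to-fine decision flow can be illustrated with the hedged Python sketch below, in which the Coarse and Fine Networks are stood in by generic classifiers and the attention gate is omitted; the function and label names are assumptions for illustration, not the paper's code.

```python
# Hedged sketch of the two-stage CF-DRNet inference flow described above.
import torch
import torch.nn as nn

GRADES = ["No DR", "Mild NPDR", "Moderate NPDR", "Severe NPDR", "PDR"]

@torch.no_grad()
def grade_fundus_image(image: torch.Tensor,
                       coarse_net: nn.Module,
                       fine_net: nn.Module) -> str:
    """Two-stage DR grading: coarse (No DR vs DR), then fine (4 DR severities)."""
    # Stage 1: binary screening. Assumed convention: index 0 = No DR, 1 = DR.
    coarse_logits = coarse_net(image)            # shape (1, 2)
    if coarse_logits.argmax(dim=1).item() == 0:
        return GRADES[0]
    # Stage 2: only images flagged as DR reach the Fine Network, which
    # separates mild / moderate / severe NPDR and PDR.
    fine_logits = fine_net(image)                # shape (1, 4)
    return GRADES[1 + fine_logits.argmax(dim=1).item()]
```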
Dual-energy computed tomography (DECT) is of great significance for clinical practice due to its huge potential to provide material-specific information. However, DECT scanners are usually more expensive than standard single-energy CT (SECT) scanners and thus are less accessible in less-developed regions. In this paper, we show that the energy-domain correlation and anatomical consistency between standard DECT images can be harnessed by a deep learning model to provide high-performance DECT imaging from fully-sampled low-energy data together with single-view high-energy data. We demonstrate the feasibility of the approach with two independent cohorts (the first including contrast-enhanced DECT scans of 5,753 image slices from 22 patients and the second including spectral CT scans without contrast injection of 2,463 image slices from another 22 patients) and show its superior performance on DECT applications. The deep-learning-based approach could further and significantly reduce the radiation dose of current premium DECT scanners, and has the potential to simplify the hardware of DECT imaging systems and to enable DECT imaging using standard SECT scanners.
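A toy sketch of the kind of mapping described is given below, assuming (without confirmation from the abstract) that the single high-energy projection is broadcast alongside the fully-sampled low-energy image and refined by a small convolutional network into a high-energy image estimate; the actual model and fusion scheme are not specified here.

```python
# Hedged, illustrative sketch only: predict a high-energy CT image from a
# low-energy image plus one high-energy projection view. The broadcast fusion
# and the shallow conv stack are assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class DualEnergySynthesisNet(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, low_energy_image: torch.Tensor,
                high_energy_view: torch.Tensor) -> torch.Tensor:
        # low_energy_image: (B, 1, H, W); high_energy_view: (B, W) single projection.
        B, _, H, W = low_energy_image.shape
        view_map = high_energy_view.view(B, 1, 1, W).expand(B, 1, H, W)
        x = torch.cat([low_energy_image, view_map], dim=1)
        # Residual prediction: refine the low-energy image toward the high-energy one.
        return low_energy_image + self.net(x)
```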