Training deep neural networks usually requires a large amount of labeled data to obtain good performance. However, in medical image analysis, obtaining high-quality labels is laborious and expensive, as accurately annotating medical images demands expert knowledge from clinicians. In this paper, we present a novel relation-driven semi-supervised framework for medical image classification. It is a consistency-based method that exploits unlabeled data by encouraging consistent predictions for a given input under perturbations, and leverages a self-ensembling model to produce high-quality consistency targets for the unlabeled data. Considering that human diagnosis often refers to previous analogous cases to make reliable decisions, we introduce a novel sample relation consistency (SRC) paradigm that effectively exploits unlabeled data by modeling the relationship information among different samples. Unlike existing consistency-based methods, which simply enforce consistency of individual predictions, our framework explicitly enforces the consistency of semantic relations among different samples under perturbations, encouraging the model to explore extra semantic information from unlabeled data. We have conducted extensive experiments to evaluate our method on two public benchmark medical image classification datasets: skin lesion diagnosis with the ISIC 2018 challenge and thorax disease classification with ChestX-ray14. Our method outperforms many state-of-the-art semi-supervised learning methods in both single-label and multi-label image classification scenarios.
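A minimal PyTorch sketch of the sample relation consistency idea described above: alongside the usual self-ensembling (mean-teacher style) prediction consistency, the pairwise relation matrix computed over a batch of features should stay consistent between the student and the teacher. The cosine-similarity relation matrix, the MSE losses, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity matrix over a batch of feature vectors (B x D)."""
    normed = F.normalize(features, dim=1)
    return normed @ normed.t()  # B x B sample-relation matrix

def consistency_losses(student_logits, teacher_logits, student_feats, teacher_feats):
    # Individual prediction consistency (standard self-ensembling term).
    pred_loss = F.mse_loss(torch.softmax(student_logits, dim=1),
                           torch.softmax(teacher_logits, dim=1))
    # SRC term: the relation among samples should stay stable under perturbation.
    src_loss = F.mse_loss(relation_matrix(student_feats),
                          relation_matrix(teacher_feats))
    return pred_loss, src_loss
```

In a full mean-teacher setup the teacher weights would be an exponential moving average of the student's, and both consistency terms would be added to the supervised cross-entropy on labeled samples with a ramp-up weight.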
Background Spectral-domain optical coherence tomography (SDOCT) can be used to detect glaucomatous optic neuropathy, but human expertise in interpreting SDOCT is limited. We aimed to develop and validate a three-dimensional (3D) deep-learning system using SDOCT volumes to detect glaucomatous optic neuropathy. Methods We retrospectively collected a dataset of 4877 SDOCT volumes of the optic disc cube for training (60%), testing (20%), and primary validation (20%) from electronic medical and research records at the Chinese University of Hong Kong Eye Centre (Hong Kong, China) and the Hong Kong Eye Hospital (Hong Kong, China). A residual network was used to build the 3D deep-learning system. Three independent datasets (two from Hong Kong and one from Stanford, CA, USA), including 546, 267, and 1231 SDOCT volumes, respectively, were used for external validation of the deep-learning system. Volumes were labelled as having or not having glaucomatous optic neuropathy according to the criteria of retinal nerve fibre layer thinning on reliable SDOCT images with position-correlated visual field defect. Heatmaps were generated for qualitative assessment. Findings 6921 SDOCT volumes, comprising 1,384,200 two-dimensional cross-sectional scans, were studied. The 3D deep-learning system had an area under the receiver operating characteristic curve (AUROC) of 0.969 (95% CI 0.960-0.976), sensitivity of 89% (95% CI 83-93), specificity of 96% (92-99), and accuracy of 91% (89-93) in the primary validation, outperforming a two-dimensional deep-learning system trained on en face fundus images (AUROC 0.921 [0.905-0.937]; p<0.0001). The 3D deep-learning system performed similarly in the external validation datasets, with AUROCs of 0.893-0.897, sensitivities of 78-90%, specificities of 79-86%, and accuracies of 80-86%. The heatmaps showed that the features learned by the 3D deep-learning system for detecting glaucomatous optic neuropathy were similar to those used by clinicians. Interpretation The proposed 3D deep-learning system performed well in detecting glaucomatous optic neuropathy in both primary and external validations. Further prospective studies are needed to estimate the incremental cost-effectiveness of incorporating an artificial intelligence-based model for glaucoma screening.
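For orientation, a minimal sketch of a 3D residual classifier over volumetric scans, in the spirit of the system described above. The backbone choice (torchvision's r3d_18), the single-channel stem replacement, the input size, and the two-class head are assumptions for illustration, not the study's actual architecture.

```python
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)
# SDOCT volumes are single-channel, so swap the RGB stem (assumption for illustration).
model.stem[0] = torch.nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                                stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # glaucomatous vs. non-glaucomatous

volumes = torch.randn(2, 1, 64, 128, 128)  # B x C x D x H x W (hypothetical size)
logits = model(volumes)
```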
Background The usefulness of 3D deep learning-based classification of breast cancer and malignancy localization from MRI has been reported. This work can potentially be very useful in the clinical domain and aid radiologists in breast cancer diagnosis. Purpose To evaluate the efficacy of a 3D deep convolutional neural network (CNN) for diagnosing breast cancer and localizing lesions in dynamic contrast-enhanced (DCE) MRI data in a weakly supervised manner. Study Type Retrospective study. Subjects A total of 1537 female study cases (mean age 47.5 ± 11.8 years) were collected from March 2013 to December 2016. All cases had labels from pathology results as well as BI-RADS categories assessed by radiologists. Field Strength/Sequence 1.5 T dynamic contrast-enhanced MRI. Assessment Deep 3D densely connected networks were trained under image-level supervision to automatically classify the images and localize the lesions. The dataset was randomly divided into training (1073), validation (157), and testing (307) subsets. Statistical Tests Accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve, and the McNemar test for breast cancer classification; Dice similarity for breast cancer localization. Results The final algorithm performance for breast cancer diagnosis showed 83.7% (257 out of 307) accuracy (95% confidence interval [CI]: 79.1%, 87.4%), 90.8% (187 out of 206) sensitivity (95% CI: 80.6%, 94.1%), and 69.3% (70 out of 101) specificity (95% CI: 59.7%, 77.5%), with an area under the ROC curve of 0.859. The weakly supervised cancer detection showed an overall Dice distance of 0.501 ± 0.274. Data Conclusion 3D CNNs demonstrated high accuracy for diagnosing breast cancer. The weakly supervised learning method showed promise for localizing lesions in volumetric radiology images with only image-level labels. Level of Evidence: 4 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;50:1144-1151.
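A minimal sketch of weakly supervised 3D localization from image-level labels via a class activation map, one common way to obtain lesion maps under image-level supervision. The tiny backbone below stands in for the paper's 3D densely connected network and is purely an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DCAMNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        fmap = self.features(x)                             # B x 128 x d x h x w
        pooled = F.adaptive_avg_pool3d(fmap, 1).flatten(1)   # global average pooling
        logits = self.classifier(pooled)
        # CAM: weight the feature maps by the classifier weights of the predicted class.
        w = self.classifier.weight[logits.argmax(1)]         # B x 128
        cam = torch.einsum('bc,bcdhw->bdhw', w, fmap)        # coarse 3D localization map
        return logits, cam
```

Upsampling and thresholding the coarse map yields a candidate lesion region that can be compared against reference annotations with a Dice-style overlap measure, as in the reported evaluation.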
Deep learning approaches have demonstrated remarkable progress in automatic chest X-ray analysis. The data-driven nature of deep models requires training data that covers a broad distribution. Therefore, it is essential to integrate knowledge from multiple datasets, especially for medical images. However, learning a disease classification model with extra chest X-ray (CXR) data remains challenging. Recent research has demonstrated that a performance bottleneck exists in joint training on different CXR datasets, and few efforts have been made to address this obstacle. In this paper, we argue that incorporating an external CXR dataset leads to imperfect training data, which raises the challenges. Specifically, the imperfection is twofold: domain discrepancy, as the image appearances vary across datasets; and label discrepancy, as different datasets are only partially labeled. To this end, we formulate the multi-label thoracic disease classification problem as weighted independent binary tasks according to the categories. For common categories shared across domains, we adopt task-specific adversarial training to alleviate the feature differences. For categories existing in a single dataset, we present uncertainty-aware temporal ensembling of model predictions to further mine information from the missing labels. In this way, our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability. We conduct extensive experiments on three datasets with more than 360,000 chest X-ray images. Our method outperforms other competing models and sets state-of-the-art performance on the official NIH test set with 0.8349 AUC, demonstrating the effectiveness of utilizing external datasets to improve internal classification.
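A minimal sketch of the "weighted independent binary tasks" formulation with masking for partially labelled categories. The temporal-ensembling pseudo-targets and the uncertainty threshold below are simplified assumptions, and the task-specific adversarial alignment of shared categories is omitted from this sketch.

```python
import torch
import torch.nn.functional as F

def multilabel_loss(logits, labels, label_mask, ema_probs=None, tau=0.1):
    """
    logits, labels: B x C; label_mask: B x C with 1 where a category is annotated
    in the sample's source dataset and 0 where the label is missing.
    ema_probs: temporal ensemble of past predictions (B x C), used as soft targets
    for missing labels only where the ensemble is decisive (illustrative assumption).
    """
    # Supervised part: independent binary cross-entropy on observed labels only.
    bce = F.binary_cross_entropy_with_logits(logits, labels, reduction='none')
    sup = (bce * label_mask).sum() / label_mask.sum().clamp(min=1)

    if ema_probs is None:
        return sup
    # Mine missing labels: trust the temporal ensemble only when it is confident.
    confident = ((ema_probs < tau) | (ema_probs > 1 - tau)).float() * (1 - label_mask)
    pseudo = F.binary_cross_entropy_with_logits(logits, ema_probs, reduction='none')
    unsup = (pseudo * confident).sum() / confident.sum().clamp(min=1)
    return sup + unsup
```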