Chen et al., Improving CNN-Based CMR Segmentation's Generalizability

Background: Convolutional neural network (CNN) based segmentation methods provide an efficient and automated way for clinicians to assess the structure and function of the heart in cardiac MR images. While CNNs can generally perform the segmentation tasks with high accuracy when training and test images come from the same domain (e.g., same scanner or site), their performance often degrades dramatically on images from different scanners or clinical sites.

Methods: We propose a simple yet effective way of improving network generalization by carefully designing data normalization and augmentation strategies to accommodate common scenarios in multi-site, multi-scanner clinical imaging data sets. We demonstrate that a neural network trained on a single-site, single-scanner dataset from the UK Biobank can be successfully applied to segmenting cardiac MR images across different sites and different scanners without substantial loss of accuracy. Specifically, the method was trained on a large set of 3,975 subjects from the UK Biobank. It was then directly tested on 600 different subjects from the UK Biobank for intra-domain testing and on two other sets for cross-domain testing: the ACDC dataset (100 subjects, 1 site, 2 scanners) and the BSCMR-AS dataset (599 subjects, 6 sites, 9 scanners).

Results: The proposed method produces promising segmentation results on the UK Biobank test set, comparable to previously reported values in the literature, while also performing well on the cross-domain test sets: a mean Dice metric of 0.90 for the left ventricle, 0.81 for the myocardium, and 0.82 for the right ventricle on the ACDC dataset; and 0.89 for the left ventricle and 0.83 for the myocardium on the BSCMR-AS dataset.

Conclusions: The proposed method offers a potential solution to improve the generalizability of CNN-based models for the cross-scanner and cross-site cardiac MR image segmentation task.
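The data normalization and augmentation strategies above are described only at a high level in the abstract. As a minimal illustrative sketch (not the authors' exact pipeline), percentile-based intensity normalization plus simple geometric and intensity augmentations might look like this; the percentile bounds and scaling range are assumed values:

```python
import numpy as np

def normalize_intensity(image, low=1.0, high=99.0):
    """Clip to robust percentiles, then rescale to zero mean / unit variance.

    Percentile clipping reduces sensitivity to scanner-specific intensity
    outliers, which is one common way to handle cross-scanner variation.
    """
    lo, hi = np.percentile(image, [low, high])
    clipped = np.clip(image, lo, hi)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)

def augment(image, rng):
    """Random flip, 90-degree rotation, and global intensity scaling."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    image = np.rot90(image, k=int(rng.integers(0, 4)))
    return image * rng.uniform(0.9, 1.1)
```

Applied per image before training, such transforms expose the network to intensity and orientation variation it would otherwise only meet at test time on other scanners.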
Deep learning is a growing trend in medical image analysis, but data on deep learning techniques applied to chest X-rays are limited. This paper proposes a deep learning algorithm for cardiothoracic ratio (CTR) calculation in chest X-rays. A fully convolutional neural network was employed to segment chest X-ray images and calculate the CTR. CTR values derived from the deep learning model were compared with the reference standard using Bland-Altman analysis, linear correlation graphs, and intra-class correlation (ICC) analyses. The diagnostic performance of the model for the detection of heart enlargement was assessed and compared with other deep learning methods and radiologists. CTR values derived from the deep learning method showed excellent agreement with the reference standard, with a mean difference of 0.0004 ± 0.0133 and 95% limits of agreement of −0.0256 to 0.0264. The correlation coefficient between deep learning and the reference standard was 0.965 (P < 0.001), and the ICC coefficient was 0.982 (95% CI 0.978-0.985) (P < 0.001). Measurement time by deep learning was significantly less than that of the manual method [0.69 (0.69-0.70) vs 25.26 (23.49-27.44) seconds, P < 0.001]. Diagnostic accuracy, specificity, and positive predictive value were comparable between the two methods. However, deep learning showed relatively higher sensitivity and negative predictive value (97.2% vs 91.4%, P = 0.004; and 96.0% vs 89.0%, P = 0.006; respectively) than the manual method. This computer-aided technique was demonstrated to be more reliable and more time- and labor-saving than the manual method for CTR calculation.
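Given heart and thorax segmentation masks, the CTR reduces to a ratio of transverse widths. A minimal sketch, assuming the common simplification of using the maximal horizontal extent of each binary mask (the paper's exact measurement convention is not stated in the abstract):

```python
import numpy as np

def max_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = maximal transverse cardiac diameter / maximal internal thoracic diameter."""
    return max_width(heart_mask) / max_width(thorax_mask)
```

With masks produced by the segmentation network, a CTR above the conventional 0.5 threshold would flag possible cardiomegaly.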
Background Convolutional neural networks (CNNs), part of deep learning, are widely used for computer vision tasks and in some medical domains, such as mammography interpretation. Applications of deep learning to carotid artery imaging remain scarce. We investigated the ability of deep learning to correctly classify contrast CT images of the carotid arteries without the need for prior feature selection. Purpose (1) To assess the ability of deep learning to differentiate symptomatic patients (prior stroke or transient ischaemic attack [TIA]) from asymptomatic patients (no prior stroke/TIA) using contrast CT scans alone. (2) To investigate whether deep learning can further discriminate between culprit and non-culprit carotid arteries in symptomatic patients. (3) To assess the interpretability of the deep learning models. Methods Carotid contrast CT scans of consented research subjects were included in the study. Symptomatic patients had confirmed carotid artery-related ischaemic stroke or TIA in the 7 days before CT imaging, and asymptomatic patients had no prior cerebrovascular events. The dataset comprised 1148 axial symptomatic slices (covering a 3 cm area of each carotid artery in 41 patients; 41 culprit and 41 non-culprit carotids) and 700 asymptomatic slices (from the bilateral carotid arteries of 25 patients). The dataset was split such that 75% was used for training and 25% for testing. A 30×30 bounding box was used to create patches of the carotid arteries from these axial slices for use as input to the CNN, a modified VGG16 architecture initialised with ImageNet weights to leverage transfer learning (the application of a model trained in one domain to a different domain), implemented in Python. Data augmentation was applied to the training set, and the model was trained for 100 epochs using a cyclic learning rate, the RMSProp optimizer, and binary cross-entropy loss.
Class activation heatmaps were generated using the Grad-CAM method to highlight the areas of the image that were most important to the model's classification decision. Results The deep learning model was 92% accurate in distinguishing carotid arteries of symptomatic patients from those of asymptomatic patients. Discriminating between culprit and non-culprit carotid arteries in symptomatic patients alone was 71% accurate. The class activation heatmaps demonstrated that the model learnt to localise the carotid artery within the image patch and to ignore the arterial lumen when making its classification decision. Conclusions Deep learning can be used to differentiate carotid CT scans of symptomatic (stroke/TIA) patients from those of asymptomatic patients without the need for prior feature engineering. The model learns to identify relevant features in the image that predict the patients' symptom state. If further validated, this approach could be used to identify high-risk patients for intensive medical therapy. Funding Acknowledgement Type of funding source: Public grant(s) – National budget only. Main funding source(s): EPVL is undertaking a PhD funded by the Cambridge School of Clinical Medicine and the Medical Research Council's Doctoral Training Partnership
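The core of Grad-CAM is simple once a backward pass has produced gradients of the class score with respect to the last convolutional feature maps: pool the gradients per channel, use them to weight the feature maps, and keep only positive evidence. A framework-agnostic numpy sketch of that combination step (assuming feature maps and gradients have already been extracted from the network):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from last-conv activations and their gradients.

    feature_maps, gradients: arrays of shape (C, H, W). Each channel is
    weighted by its spatially averaged gradient, the channels are summed,
    and a ReLU keeps only regions that positively support the class.
    """
    weights = gradients.mean(axis=(1, 2))               # (C,) pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)   # (H, W) weighted sum
    cam = np.maximum(cam, 0.0)                          # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalise to [0, 1]
    return cam
```

Upsampled to the input patch size and overlaid on the CT slice, this heatmap is what reveals whether the model attends to the vessel wall rather than the lumen.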
Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they typically cover a diverse set of structures, making it difficult for the segmentation model to learn good decision boundaries with high sensitivity and precision. The issue stems from the highly heterogeneous nature of the background class, which results in multi-modal distributions. Empirically, we find that neural networks trained with a heterogeneous background class struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution over background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available.
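CoLab learns its background decomposition with an auxiliary task-generator network; as a simpler, purely illustrative stand-in for that idea, the background class can be split into subclasses by clustering per-pixel feature vectors (the k-means approach and all parameters below are assumptions, not the authors' method):

```python
import numpy as np

def context_labels(features, background_mask, k=3, iters=20, seed=0):
    """Split a heterogeneous background into k subclasses via k-means.

    features: (H, W, D) per-pixel feature vectors.
    background_mask: (H, W) boolean, True where the pixel is background.
    Returns an (H, W) label map: 0 = ROI, 1..k = background subclasses.
    """
    rng = np.random.default_rng(seed)
    X = features[background_mask]                       # (N, D)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each background pixel to its nearest cluster centre
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    labels = np.zeros(background_mask.shape, dtype=int)
    labels[background_mask] = assign + 1
    return labels
```

Training the segmentation model against such multi-class context labels, instead of a single lumped background class, is the mechanism by which background samples can be pushed into more compact clusters away from the ROI decision boundary.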