Objectives: Gadolinium-based contrast agents (GBCAs) have become an integral part of daily clinical decision-making over the last 3 decades. However, there is broad consensus that GBCAs should be used only when no contrast-free magnetic resonance imaging (MRI) technique is available, in order to reduce the amount of GBCA administered to patients. In the current study, we investigate the possibility of predicting contrast enhancement from noncontrast multiparametric brain MRI scans using a deep-learning (DL) architecture.

Materials and Methods: A Bayesian DL architecture for the prediction of virtual contrast enhancement was developed using 10-channel multiparametric MRI data acquired before GBCA application. The model was evaluated quantitatively and qualitatively on 116 data sets from glioma patients and healthy subjects by comparing the virtual contrast enhancement maps to ground truth contrast-enhanced T1-weighted imaging. Subjects were split into 3 groups: enhancing tumors (n = 47), nonenhancing tumors (n = 39), and patients without pathologic changes (n = 30). The tumor regions were segmented for a detailed analysis of subregions, and the influence of the different MRI sequences on the prediction was determined.

Results: Quantitative evaluation of the virtual contrast enhancement yielded a sensitivity of 91.8% and a specificity of 91.2%. T2-weighted imaging, followed by diffusion-weighted imaging, was the most influential sequence for the prediction of virtual contrast enhancement. Analysis of the whole brain showed a mean area under the curve of 0.969 ± 0.019, a peak signal-to-noise ratio (PSNR) of 22.967 ± 1.162 dB, and a structural similarity index (SSIM) of 0.872 ± 0.031. The enhancing and nonenhancing tumor subregions performed worse (except for the PSNR of the nonenhancing tumors). Qualitative evaluation by 2 raters on a 4-point Likert scale yielded good to excellent (3–4) ratings for 91.5% of the enhancing and 92.3% of the nonenhancing gliomas. However, despite the good scores and ratings, there were visible deviations between the virtual contrast maps and the ground truth, including a more blurred, less nodular ring enhancement, a few low-contrast false-positive enhancements in nonenhancing gliomas, and a tendency to omit smaller vessels. These "features" were also exploited by 2 trained radiologists in a Turing test, allowing them to discriminate between real and virtual contrast-enhanced images in 80% and 90% of cases, respectively.

Conclusions: The introduced model for virtual gadolinium enhancement demonstrates very good quantitative and qualitative performance. Future systematic studies in larger patient collectives with varying neurological disorders are needed to evaluate whether the introduced virtual contrast enhancement might reduce GBCA exposure in clinical practice.
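As an illustrative aside (not part of the study), the PSNR figure reported above is a standard image-fidelity metric and can be sketched in a few lines of NumPy. This is a minimal, generic implementation of the textbook formula, not the authors' evaluation code; the function name and the assumed intensity range of 1.0 are hypothetical choices for the example.

```python
import numpy as np

def psnr(reference: np.ndarray, prediction: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - prediction.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Example: a uniform intensity error of 0.1 on a [0, 1] scale gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)
value = psnr(ref, pred)
```

Higher PSNR means the virtual contrast map deviates less, on average, from the ground truth contrast-enhanced image; SSIM complements it by comparing local structure rather than per-voxel error.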
Magnetic resonance imaging (MRI) offers strong soft tissue contrast but suffers from long acquisition times and requires tedious annotation by radiologists. Traditionally, these challenges have been addressed separately with reconstruction and image analysis algorithms. To test whether performance could be improved by treating both as a single end-to-end task, we hosted the K2S challenge, in which participants segmented knee bones and cartilage from 8× undersampled k-space. We curated the 300-patient K2S dataset of multicoil raw k-space data and radiologist quality-checked segmentations. In total, 87 teams registered for the challenge and 12 submitted, with methodologies ranging from serial reconstruction and segmentation, to end-to-end networks, to one that eschewed a reconstruction algorithm altogether. Four teams produced strong submissions, with the winner achieving a weighted Dice similarity coefficient of 0.910 ± 0.021 across knee bones and cartilage. Interestingly, there was no correlation between reconstruction and segmentation metrics. Further analysis showed that the top four submissions were suitable for downstream biomarker analysis, largely preserving cartilage thicknesses and key bone shape features with respect to ground truth. K2S thus demonstrated the value of treating reconstruction and image analysis as an end-to-end task, as this leaves room for optimization while more realistically reflecting the long-term use case of tools being developed by the MR community.
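For readers unfamiliar with the challenge's ranking metric, the Dice similarity coefficient measures the overlap between a predicted segmentation and the ground truth mask. The sketch below is a generic NumPy implementation of the standard definition, not the K2S evaluation code; the function name and edge-case handling are assumptions for illustration.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total

# Example: prediction covers 1 of 2 ground-truth voxels and adds none,
# so Dice = 2*1 / (2 + 1) = 2/3.
gt = np.array([1, 1, 0, 0])
pred = np.array([1, 0, 0, 0])
score = dice(gt, pred)
```

A Dice of 1.0 indicates perfect overlap and 0.0 indicates none; the challenge's reported 0.910 is a weighted average of per-structure Dice scores across bones and cartilage.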