Purpose: To demonstrate the repeatability of fast 3D T1 mapping using Magnetization-Prepared Golden-angle RAdial Sparse Parallel (MP-GRASP) MRI and its robustness to variation of imaging parameters, including flip angle and spatial resolution, in phantoms and the brain. Theory and Methods: Multiple imaging experiments were performed to (1) assess the robustness of MP-GRASP T1 mapping to B1 inhomogeneity using a single-tube phantom filled with uniform MnCl2 liquid; (2) compare the repeatability of T1 mapping between MP-GRASP and inversion recovery-based spin-echo (IR-SE) imaging (over 12 scans), using a commercial T1MES phantom; (3) evaluate the longitudinal variation of T1 estimation using MP-GRASP with varying imaging parameters, including spatial resolution, flip angle, TR/TE, and acceleration rate, using the T1MES phantom (106 scans performed over a period of 12 months); and (4) evaluate the variation of T1 estimation using MP-GRASP with varying imaging parameters in the brain (24 scans in a single visit). In addition, the accuracy of MP-GRASP T1 mapping was validated against IR-SE by performing linear correlation and calculating Lin's concordance correlation coefficient (CCC). Results: MP-GRASP demonstrates good robustness to B1 inhomogeneity, with intra-slice variability below 1% in the single-tube phantom experiment. The longitudinal variability is low both in the phantom (below 2.5%) and in the brain (below 2%) with varying imaging parameters. The T1 values estimated from MP-GRASP are accurate compared with those from IR-SE imaging (R2 = 0.997, Lin's CCC = 0.996). Conclusion: MP-GRASP shows excellent repeatability of T1 estimation over time, and it is also robust to variation of the different imaging parameters evaluated in this study.
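The agreement metric named above, Lin's concordance correlation coefficient, combines correlation with a penalty for systematic bias between two raters or methods: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal pure-Python sketch (the sample T1 values below are illustrative, not the study's measurements):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    sequences of measurements, using population (1/n) statistics."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    var_x = sum((v - mx) ** 2 for v in x) / n
    var_y = sum((v - my) ** 2 for v in y) / n
    cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov_xy / (var_x + var_y + (mx - my) ** 2)

# Illustrative paired T1 values in ms (reference vs. estimate).
ref = [250.0, 500.0, 750.0, 1000.0, 1250.0]
est = [255.0, 495.0, 760.0, 990.0, 1245.0]
print(round(lins_ccc(ref, est), 4))  # → 0.9998
```

Unlike Pearson's r, the CCC drops below 1 when one method is systematically offset or scaled relative to the other, which is why it is preferred for method-agreement studies such as this one.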
T1 mapping is increasingly used in clinical practice and research studies. With limited scan time, existing techniques often have limited spatial resolution, contrast resolution, and slice coverage. High fat concentrations yield complex errors in Look–Locker T1 methods. In this study, a dual-echo 2D radial inversion-recovery T1 (DEradIR-T1) technique was developed for fast fat–water-separated T1 mapping. The DEradIR-T1 technique was tested in phantoms, 5 volunteers, and 28 patients on a 3 T clinical MRI scanner. In our study, simulations were performed to analyze the composite (fat + water) and water-only T1 under different echo times (TE). In standardized phantoms, an inversion-recovery spin-echo (IR-SE) sequence with and without fat-saturation pulses served as a T1 reference. Parameter mapping with DEradIR-T1 was also assessed in vivo, and values were compared with modified Look–Locker inversion recovery (MOLLI). Bland–Altman analysis and two-tailed paired t-tests were used to compare the parameter maps from DEradIR-T1 with the references. Simulations of the composite and water-only T1 under different TE values and levels of fat matched the in vivo studies. T1 maps from DEradIR-T1 on a NIST phantom (Pcomp = 0.97) and a Calimetrix fat–water phantom (Pwater = 0.56) matched the references. In vivo T1 was compared with that of MOLLI: R2 = 0.77 (composite); R2 = 0.72 (water-only). In this work, intravoxel fat is found to have a variable, echo-time-dependent effect on measured T1 values, and this effect may be mitigated using the proposed DEradIR-T1.
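For context, Look–Locker-type sequences such as MOLLI (the comparison method above) typically fit the inversion-recovery signal to a three-parameter model, S(TI) = A − B·exp(−TI/T1*), and then apply the standard correction T1 ≈ T1*·(B/A − 1) to convert the apparent T1* to T1. A minimal sketch of that model and correction, with illustrative (hypothetical) fit parameters:

```python
import math

def ir_signal(ti, a, b, t1_star):
    """Three-parameter inversion-recovery model: S(TI) = A - B*exp(-TI/T1*)."""
    return a - b * math.exp(-ti / t1_star)

def look_locker_t1(a, b, t1_star):
    """Standard Look-Locker correction from apparent T1* to T1."""
    return t1_star * (b / a - 1.0)

# Illustrative fit results (arbitrary signal units, times in ms);
# not values taken from the paper.
a, b, t1_star = 1.0, 1.9, 700.0
print(look_locker_t1(a, b, t1_star))  # apparent 700 ms -> corrected ~630 ms
```

The abstract's point is that intravoxel fat perturbs exactly this kind of single-compartment fit, since the voxel signal is then a TE-dependent mixture of two relaxation pools rather than one.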
Conventional water–fat separation approaches suffer from long computational times and are prone to water/fat swaps. To solve these problems, we propose a deep learning-based dual-echo water–fat separation method. With IRB approval, raw data from 68 pediatric clinically indicated dual-echo scans were analyzed, corresponding to 19,382 contrast-enhanced images. A densely connected hierarchical convolutional network was constructed, in which dual-echo images and corresponding echo times were used as input, and water/fat images obtained using the projected power method were regarded as references. Models were trained and tested using knee images with 8-fold cross-validation and validated on out-of-distribution data from the ankle, foot, and arm. Using the proposed method, the average computational time for a volumetric dataset with ~400 slices was reduced from 10 min to under 1 min. High fidelity was achieved (correlation coefficient of 0.9969, l1 error of 0.0381, SSIM of 0.9740, pSNR of 58.6876), and water/fat swaps were mitigated. It is of particular interest that metal artifacts were substantially reduced, even when the training set contained no images with metallic implants. Using models trained with only contrast-enhanced images, water/fat images were predicted from non-contrast-enhanced images with high fidelity. The proposed water–fat separation method has been demonstrated to be fast and robust, with the added capability of compensating for metal artifacts.
We design a data-driven method to generate water/fat images from dual-echo complex Dixon images, aimed at near-instant water-fat separation with high robustness. A hierarchical convolutional neural network is employed, where ground truth images are obtained using a binary quadratic optimization approach. With IRB approval and informed consent, 9281 image sets are collected from 30 pediatric patients to train and test networks, with the application of six-fold cross validation. In addition to high fidelity and significantly reduced processing time, the predicted images are superior to the ground truth in mitigation of water/fat swaps and correction of artifacts introduced by metallic implants.
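Both abstracts above build on two-point (dual-echo) Dixon separation. In the idealized case, where water and fat are exactly in phase at one echo and exactly opposed at the other, with no B0 field-map error, the separation reduces to a simple sum and difference per voxel; a toy sketch under those assumptions (the learned and optimization-based methods described above exist precisely because real data violate them, requiring field-map estimation that can fail with water/fat swaps):

```python
def two_point_dixon(in_phase, opposed_phase):
    """Idealized two-point Dixon: water = (IP + OP)/2, fat = (IP - OP)/2.
    Assumes perfect in-phase/opposed-phase echoes and no B0 field-map error."""
    water = [(ip + op) / 2.0 for ip, op in zip(in_phase, opposed_phase)]
    fat = [(ip - op) / 2.0 for ip, op in zip(in_phase, opposed_phase)]
    return water, fat

# Toy voxels: true water/fat signals produce IP = W + F and OP = W - F.
true_w, true_f = [0.8, 0.3], [0.2, 0.7]
ip = [w + f for w, f in zip(true_w, true_f)]
op = [w - f for w, f in zip(true_w, true_f)]
w, f = two_point_dixon(ip, op)
print(w, f)  # recovers the true water and fat signals (up to float rounding)
```

A water/fat swap corresponds to the field-map phase being mis-resolved by π in a region, which exchanges the two outputs; that ambiguity is what the hierarchical networks in these papers learn to resolve.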