Purpose: To present a deep learning-based reconstruction method for spatiotemporally encoded single-shot MRI that simultaneously obtains water and fat images. Methods: Spatiotemporally encoded MRI is an ultrafast MRI branch that can encode chemical shift information owing to its special quadratic phase modulation. A deep learning approach using a 2D U-Net was proposed to reconstruct the spatiotemporally encoded signal and obtain water and fat images simultaneously. The training data for the U-Net were generated by MRiLab software (version 1.3) with various synthetic models. Numerical simulations and experiments on ex vivo pork and in vivo rats on a 7.0 T Varian MRI system (Agilent Technologies, Santa Clara, CA) were performed, and the deep learning results were compared with those obtained by state-of-the-art algorithms. The structural similarity index and the signal-to-ghost ratio were used to evaluate the residual artifacts of the different reconstruction methods. Results: With a well-trained neural network, the proposed deep learning approach can accomplish signal reconstruction within 0.46 s on a personal computer, which is comparable with the conjugate gradient method (0.41 s) and much faster than the state-of-the-art super-resolved water-fat image reconstruction method (30.31 s). The results of numerical simulations, ex vivo pork experiments, and in vivo rat experiments demonstrate that the deep learning approach achieves better fidelity and higher spatial resolution than the other two methods. The deep learning approach also has a great advantage in artifact suppression, as indicated by the signal-to-ghost ratio results. Conclusion: Spatiotemporally encoded MRI with deep learning can provide ultrafast water-fat separation with better performance than state-of-the-art methods.
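For readers unfamiliar with this kind of architecture, the sketch below shows, in PyTorch, how a small 2D U-Net can map a two-channel (real/imaginary) SPEN signal to water and fat images in a single forward pass. It is not the authors' network: the depth, channel widths, activations, and training loss are illustrative assumptions only.

```python
# Minimal 2-D U-Net-style sketch: SPEN signal (2 channels) -> water/fat images.
# NOT the published implementation; hyperparameters here are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class WaterFatUNet(nn.Module):
    def __init__(self, in_ch=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bott = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 2, 1)  # channel 0: water, channel 1: fat

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Training would minimize, e.g., an L1/L2 loss between the predicted and the
# reference water/fat images produced by the simulation software.
net = WaterFatUNet()
dummy = torch.randn(1, 2, 128, 128)   # real/imaginary SPEN signal
water_fat = net(dummy)                # shape (1, 2, 128, 128)
```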
Purpose: To develop a deep learning-based method, dubbed Denoising CEST Network (DECENT), to fully exploit the spatiotemporal correlation prior for CEST image denoising. Methods: DECENT is composed of two parallel pathways with different convolution kernel sizes, aiming to extract the global and spectral features embedded in CEST images. Each pathway consists of a modified U-Net with a residual encoder-decoder network and 3D convolutions. A fusion pathway with a 1 × 1 × 1 convolution kernel is used to concatenate the two parallel pathways, and the output of DECENT is the noise-reduced CEST images. The performance of DECENT was validated in numerical simulations, egg white phantom experiments, and ischemic mouse brain and human skeletal muscle experiments, in comparison with existing state-of-the-art denoising methods. Results: Rician noise was added to the CEST images to mimic a low-SNR situation in the numerical simulations, egg white phantom experiments, and mouse brain experiments, while the human skeletal muscle experiments were of inherently low SNR. From the denoising results evaluated by peak SNR (PSNR) and the structural similarity index (SSIM), the proposed deep learning-based denoising method (DECENT) achieves better performance than existing CEST denoising methods such as NLmCED, MLSVD, and BM4D, while avoiding complicated parameter tuning and time-consuming iterative processes. Conclusions: DECENT can well exploit the prior spatiotemporal correlation knowledge of CEST images and restore noise-free images from their noisy observations, outperforming state-of-the-art denoising methods.
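The sketch below illustrates the dual-pathway-plus-fusion idea in PyTorch. It is heavily simplified relative to DECENT: each pathway is reduced to a small residual stack of 3D convolutions rather than a modified residual encoder-decoder U-Net, and the kernel sizes, channel counts, and depth are assumptions chosen only to keep the fusion step readable.

```python
# Simplified dual-pathway denoiser with 1x1x1 fusion (not the real DECENT).
import torch
import torch.nn as nn

class ResPathway(nn.Module):
    """Residual stack of 3-D convolutions with a configurable kernel size."""
    def __init__(self, channels=16, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.inp = nn.Conv3d(1, channels, kernel, padding=pad)
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel, padding=pad), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel, padding=pad),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.relu(self.inp(x))
        return self.relu(h + self.body(h))   # residual connection

class DualPathwayDenoiser(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.path_a = ResPathway(channels, kernel=3)   # smaller receptive field
        self.path_b = ResPathway(channels, kernel=5)   # larger receptive field
        # 1x1x1 convolution fuses the concatenated pathways into one output stack.
        self.fuse = nn.Conv3d(2 * channels, 1, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.path_a(x), self.path_b(x)], dim=1))

# Input: CEST images stacked along the saturation-offset dimension,
# shape (batch, 1, offsets, height, width); output: denoised stack.
net = DualPathwayDenoiser()
noisy = torch.randn(1, 1, 54, 96, 96)
denoised = net(noisy)   # same shape as the input
```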
This work introduces and validates a deep-learning-based fitting method that can rapidly provide accurate and robust estimation of cytological features of brain tumors by fitting the IMPULSED (imaging microstructural parameters using limited spectrally edited diffusion) model to diffusion-weighted MRI data. Methods: A U-Net was applied to rapidly quantify the extracellular diffusion coefficient (D_ex), cell size (d), and intracellular volume fraction (v_in) of brain tumors. At the training stage, image-based training data, synthesized by randomizing the quantifiable microstructural parameters within specific ranges, were used to train the U-Net. At the test stage, the pre-trained U-Net was applied to estimate the microstructural parameters from simulated data and from in vivo data acquired on patients at 3T. The U-Net was compared with conventional non-linear least-squares (NLLS) fitting in simulations in terms of estimation accuracy and precision. Results: Our results confirm that the proposed method yields better fidelity in simulations and is more robust to noise than NLLS fitting. For in vivo data, the U-Net yields an obvious quality improvement in the parameter maps, and the estimates of all parameters are in good agreement with NLLS fitting. Moreover, our method is several orders of magnitude faster than NLLS fitting (from about 5 min to <1 s). Conclusion: The image-based training scheme proposed herein helps to improve the quality of the estimated parameters. Our deep-learning-based fitting method can estimate the cell microstructural parameters quickly and accurately.
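The snippet below sketches the image-based training-data synthesis step in NumPy: parameter maps are randomized within assumed ranges, passed through a forward model, and corrupted with noise to form network inputs and labels. The function impulsed_signal is a hypothetical placeholder, not the actual IMPULSED signal equation, and the parameter ranges are illustrative assumptions only.

```python
# Sketch of image-based training-data synthesis for a parameter-mapping network.
# `impulsed_signal` is a placeholder forward model, NOT the IMPULSED equation.
import numpy as np

rng = np.random.default_rng(0)

def impulsed_signal(d_ex, d_cell, v_in, n_meas=10):
    """Placeholder forward model: maps parameter maps to a stack of
    diffusion-weighted signals (one per acquisition setting)."""
    b_like = np.linspace(0.1, 1.0, n_meas)[:, None, None]
    # Purely illustrative two-compartment decay, not the real signal model.
    return v_in * np.exp(-b_like * d_cell * 0.1) + (1 - v_in) * np.exp(-b_like * d_ex)

def synthesize_example(shape=(64, 64), noise_sd=0.02):
    # Randomize quantifiable parameters within assumed physiological ranges.
    d_ex = rng.uniform(0.5, 3.0, shape)     # extracellular diffusivity (um^2/ms)
    d_cell = rng.uniform(5.0, 20.0, shape)  # cell diameter (um)
    v_in = rng.uniform(0.1, 0.9, shape)     # intracellular volume fraction
    signal = impulsed_signal(d_ex, d_cell, v_in)
    signal += noise_sd * rng.standard_normal(signal.shape)  # add noise
    target = np.stack([d_ex, d_cell, v_in])                 # network labels
    return signal.astype(np.float32), target.astype(np.float32)

inputs, labels = synthesize_example()
print(inputs.shape, labels.shape)   # (10, 64, 64) (3, 64, 64)
```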
This study assesses the feasibility of training a convolutional neural network (CNN) to fit the IMPULSED (imaging microstructural parameters using limited spectrally edited diffusion) model to diffusion-weighted (DW) data and evaluates its performance on brain tumor (poorly differentiated adenocarcinoma) patient data acquired directly from a clinical MR scanner. Comparisons were made with results calculated from the non-linear least-squares (NLLS) algorithm. More accurate and robust results were obtained by our CNN method, with a processing speed several orders of magnitude faster than the reference method (from 5 min to 1 s).
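To make the speed comparison concrete, the sketch below shows what a voxel-wise NLLS baseline typically looks like with scipy.optimize.least_squares. The model function is a hypothetical two-compartment placeholder, not the IMPULSED equation; the point is only that running a bounded iterative fit for every voxel is far slower than a single CNN forward pass over the whole image.

```python
# Voxel-wise NLLS baseline sketch with a placeholder signal model.
import numpy as np
from scipy.optimize import least_squares

B_LIKE = np.linspace(0.1, 1.0, 10)   # stand-in for the acquisition settings

def model(params, b):
    # Illustrative two-compartment decay, NOT the IMPULSED model.
    d_ex, d_cell, v_in = params
    return v_in * np.exp(-b * d_cell * 0.1) + (1 - v_in) * np.exp(-b * d_ex)

def fit_voxel(signal, x0=(1.5, 10.0, 0.5)):
    # Bounded NLLS fit for a single voxel's signal curve.
    res = least_squares(
        lambda p: model(p, B_LIKE) - signal,
        x0=x0,
        bounds=([0.1, 1.0, 0.0], [3.5, 25.0, 1.0]),
    )
    return res.x

def fit_image(signals):
    # signals: (n_meas, H, W); the per-voxel loop is what makes NLLS slow.
    n, h, w = signals.shape
    maps = np.zeros((3, h, w))
    for i in range(h):
        for j in range(w):
            maps[:, i, j] = fit_voxel(signals[:, i, j])
    return maps

# Example: fit one synthetic voxel.
true_params = np.array([2.0, 12.0, 0.4])
noisy = model(true_params, B_LIKE) + 0.01 * np.random.default_rng(1).standard_normal(10)
print(fit_voxel(noisy))
```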
Water-fat separation is a powerful tool for diagnosing many diseases, and many efforts have been made to reduce its scan time. Spatiotemporally encoded (SPEN) single-shot MRI, as an emerging ultrafast MRI method, can accomplish the fastest water-fat separation, since only one shot is required. However, the SPEN water/fat images obtained by state-of-the-art methods still have some shortcomings. Here, a deep learning approach based on a U-Net was proposed to obtain SPEN water and fat images simultaneously with improved spatial resolution, better fidelity, and reduced reconstruction time. The efficiency of our method is demonstrated by numerical simulations and in vivo rat experiments.
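A small evaluation sketch follows, showing how fidelity and residual artifacts can be quantified in practice: SSIM from scikit-image against a reference image, plus one common ROI-based definition of a signal-to-ghost ratio. The exact SGR formula used in the paper is not reproduced here, so this particular definition should be treated as an assumption.

```python
# Evaluation sketch: SSIM against a reference plus an ROI-based signal-to-ghost
# ratio. The SGR definition below is a common choice, assumed for illustration.
import numpy as np
from skimage.metrics import structural_similarity

def signal_to_ghost_ratio(image, signal_mask, ghost_mask):
    """Ratio of mean magnitude inside the object to mean magnitude in a
    ghost/background region; higher values indicate fewer residual artifacts."""
    sig = np.abs(image[signal_mask]).mean()
    ghost = np.abs(image[ghost_mask]).mean()
    return sig / max(ghost, 1e-12)

# Toy usage with synthetic stand-ins for the reconstructed and reference images.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
recon = reference + 0.05 * rng.standard_normal((128, 128))

ssim_val = structural_similarity(
    reference, recon, data_range=reference.max() - reference.min()
)
signal_mask = np.zeros((128, 128), dtype=bool)
signal_mask[32:96, 32:96] = True     # assumed object ROI
ghost_mask = ~signal_mask            # everything outside the object
print(f"SSIM = {ssim_val:.3f}, "
      f"SGR = {signal_to_ghost_ratio(recon, signal_mask, ghost_mask):.1f}")
```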