Purpose: Gadolinium-based contrast agents (GBCAs) have been successfully applied in magnetic resonance (MR) imaging to facilitate better lesion visualization. However, gadolinium deposition in the human brain has recently raised widespread concerns. Moreover, although high-resolution three-dimensional (3D) MR images are preferred by most existing medical image processing algorithms, their long scan durations and high acquisition costs mean that 2D MR images remain far more common clinically. Developing alternative solutions that synthesize 3D contrast-enhanced MR images without GBCA injection has therefore become an urgent requirement.

Methods: This study proposed a deep learning framework that produces 3D isotropic full-contrast T2Flair images from 2D anisotropic noncontrast T2Flair image stacks. The super-resolution (SR) and contrast-enhanced (CE) synthesis tasks are completed in sequence using an identical generative adversarial network (GAN) architecture and the same training techniques. To address the problem that intramodality datasets from different scanners have scanner-specific combinations of orientations, contrasts, and resolutions, we applied a region-based data augmentation technique on the fly during training to simulate the variety of imaging protocols encountered in the clinic. We further improved the network by introducing atrous spatial pyramid pooling, enhanced residual blocks, and deep supervision for better quantitative and qualitative results.

Results: The proposed method achieved superior CE-synthesis performance in both quantitative metrics and perceptual evaluation. Specifically, the PSNR, structural similarity index, and AUC were 32.25 dB, 0.932, and 0.991 over the whole brain and 24.93 dB, 0.851, and 0.929 in tumor regions. The radiologists' evaluations confirmed that the proposed method supports diagnosis with high confidence. An analysis of generalization ability showed that, benefiting from the proposed data augmentation technique, the network can be applied to "unseen" datasets with only slight drops in quantitative and qualitative results.

Conclusion: Our work demonstrates the clinical potential of synthesizing diagnostic 3D isotropic CE brain MR images from a single 2D anisotropic noncontrast sequence.
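The abstract above names atrous spatial pyramid pooling (ASPP) as one of the architectural improvements but gives no layer-level details. The following is a minimal PyTorch-style sketch of a generic 2D ASPP block for context only; the channel counts, dilation rates, normalization, and activation choices are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn


class ASPP(nn.Module):
    """Minimal atrous spatial pyramid pooling block (2D).

    Hypothetical sketch: parallel dilated convolutions capture multi-scale
    context, then a 1x1 convolution fuses the branches. All hyperparameters
    here are assumptions, not the paper's configuration.
    """

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.InstanceNorm2d(out_ch),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated multi-scale features back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # dummy feature map
    y = ASPP(64, 64)(x)
    print(y.shape)                     # torch.Size([1, 64, 128, 128])
```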
Objective: High-resolution (HR) multi-modal magnetic resonance imaging (MRI) is crucial in clinical practice for accurate diagnosis and treatment. However, challenges such as budget constraints, potential contrast agent deposition, and image corruption often limit the acquisition of multiple sequences from a single patient. The development of novel methods to reconstruct under-sampled images and synthesize missing sequences is therefore crucial for clinical and research applications.

Approach: In this paper, we propose a unified hybrid framework called SIFormer, which utilizes any available low-resolution (LR) MRI contrast configurations to perform super-resolution (SR) of poor-quality MR images and impute missing sequences simultaneously in one forward pass. SIFormer consists of a hybrid generator and a convolution-based discriminator. The generator incorporates two key blocks. First, the dual branch attention (DBA) block combines the long-range dependency modeling of the transformer with the high-frequency local feature extraction of the convolutional neural network (CNN) in a channel-wise split manner. Second, we introduce a learnable gating adaptation multi-layer perceptron (GA-MLP) in the feed-forward block to optimize information transmission efficiently.

Main Results: Comparative evaluations against six state-of-the-art methods demonstrate that SIFormer achieves better quantitative performance and produces more visually pleasing results for image SR and synthesis tasks across multiple datasets.

Significance: Extensive experiments on multi-center, multi-contrast MRI datasets, including both healthy individuals and brain tumor patients, highlight the potential of the proposed method to serve as a valuable supplement to MRI sequence acquisition in clinical and research settings.
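The DBA block is described only at the level of a channel-wise split between a transformer branch and a CNN branch. Below is a hedged PyTorch sketch of such a split; the specific attention settings, convolution layout, and fusion step are illustrative assumptions and may differ from SIFormer's actual design.

```python
import torch
import torch.nn as nn


class DualBranchAttention(nn.Module):
    """Illustrative dual-branch attention (DBA) block.

    Hypothetical sketch: one half of the channels passes through multi-head
    self-attention (long-range dependencies), the other half through a small
    convolutional branch (high-frequency local detail); the outputs are then
    concatenated and fused. Layer choices are assumptions, not SIFormer's.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        assert channels % 2 == 0, "channels must be even for the split"
        half = channels // 2
        # Transformer branch: self-attention over flattened spatial tokens.
        self.norm = nn.LayerNorm(half)
        self.attn = nn.MultiheadAttention(half, num_heads, batch_first=True)
        # CNN branch: depth-wise + point-wise convolutions for local detail.
        self.conv = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half),
            nn.GELU(),
            nn.Conv2d(half, half, kernel_size=1),
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_t, x_c = torch.chunk(x, 2, dim=1)           # channel-wise split
        # Attention branch operates on (B, H*W, C/2) token sequences.
        tokens = self.norm(x_t.flatten(2).transpose(1, 2))
        attn_out, _ = self.attn(tokens, tokens, tokens)
        x_t = attn_out.transpose(1, 2).reshape(b, c // 2, h, w)
        # Convolution branch keeps the spatial layout.
        x_c = self.conv(x_c)
        return self.fuse(torch.cat([x_t, x_c], dim=1)) + x   # residual fusion


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)     # dummy feature map
    y = DualBranchAttention(64)(x)
    print(y.shape)                     # torch.Size([1, 64, 32, 32])
```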