2020
DOI: 10.1002/mp.14539

Multimodal MRI synthesis using unified generative adversarial networks

Abstract: Purpose Complementary information obtained from multiple tissue contrasts helps physicians assess, diagnose, and plan treatment for a variety of diseases. However, acquiring multiple-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. Methods A unifi…

Cited by 57 publications (77 citation statements)
References 46 publications
“…This leads to superior quality of translated images compared to traditional Cycle-GAN models, as well as the novel capability of flexibly translating an input image to any desired target domain. StarGAN was previously used in computer vision tasks such as facial attribute transfer and facial expression synthesis [13], and was recently used for multimodal MRI synthesis [14].…”
Section: StarGAN
mentioning confidence: 99%
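The any-to-any translation this excerpt describes comes from conditioning a single generator on a target-domain label rather than training one network per modality pair. Below is a minimal PyTorch sketch of that conditioning idea, assuming the original StarGAN convention of tiling a one-hot modality label into extra input channels; the class name ConditionalGenerator and the layer sizes are illustrative, not taken from the cited works.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    # Single generator shared across all modalities: the desired target
    # modality is injected as one-hot label channels concatenated to the image.
    def __init__(self, img_channels=1, num_domains=4, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, base, kernel_size=7, stride=1, padding=3),
            nn.InstanceNorm2d(base, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, img_channels, kernel_size=7, stride=1, padding=3),
            nn.Tanh(),
        )

    def forward(self, x, target_label):
        # target_label: (N, num_domains) one-hot vector, tiled to a spatial map
        c = target_label[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, c], dim=1))

# Usage: translate a batch of single-channel slices to the domain at index 2.
g = ConditionalGenerator(img_channels=1, num_domains=4)
x = torch.randn(8, 1, 128, 128)
label = torch.zeros(8, 4)
label[:, 2] = 1.0
fake = g(x, label)  # -> (8, 1, 128, 128)
```

Because the target domain is an input rather than being baked into the weights, the same generator can map any source modality to any target modality, which is what removes the need for a separate model per modality pair.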
“…Inspired by the tremendous success of deep learning in computer vision, 24–27 deep learning‐based methods have recently been investigated for medical image reconstruction, 28–31 analysis, 32–36 and synthesis. 37,38 Studies have demonstrated that deep learning‐based approaches significantly outperform CS‐based methods for image reconstruction. 39–42 …”
Section: Introduction
mentioning confidence: 99%
“…20–23 Inspired by the tremendous success of deep learning in computer vision, 24–27 deep learning-based methods have recently been investigated for medical image reconstruction, 28–31 analysis, 32–36 and synthesis. 37,38 Studies have demonstrated that deep learning-based approaches significantly outperform CS-based methods for image reconstruction. 39–42 Deep learning-based US image reconstruction algorithms and US beamforming methods have been proposed for processing both fully sampled and subsampled US RF data.…”
Section: Introduction
mentioning confidence: 99%
“…These methods either synthesize one modality from another (i.e., cross-modality) or map both modalities to a common shared domain. Specifically, generative adversarial networks (GANs) have held great promise in predicting medical images of different brain imaging modalities from a given modality [5,6,7]. For instance, [5] suggested a joint neuroimage synthesis and representation learning (JSRL) framework with transfer learning for subjective cognitive decline conversion prediction, where they imputed missing PET images using MRI scans.…”
Section: Introduction
mentioning confidence: 99%
“…For instance, [5] suggested a joint neuroimage synthesis and representation learning (JSRL) framework with transfer learning for subjective cognitive decline conversion prediction, where they imputed missing PET images using MRI scans. In addition, [6] proposed a unified GAN that trains only a single generator and a single discriminator to learn the mappings among images of four different modalities. Furthermore, [7] translated T1-weighted magnetic resonance imaging (MRI) to T2-weighted MRI using a GAN.…”
Section: Introduction
mentioning confidence: 99%
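The single-discriminator design that [6] describes is typically realized, as in StarGAN, by giving the discriminator an auxiliary classification head over the modality labels in addition to its real/fake head. A minimal sketch under that assumption; the class name MultiDomainDiscriminator and the layer sizes are illustrative, not drawn from the cited paper.

```python
import torch
import torch.nn as nn

class MultiDomainDiscriminator(nn.Module):
    # One discriminator for all modalities: the adversarial head scores
    # real vs. fake patches, while an auxiliary head classifies which of the
    # num_domains modalities the input image belongs to.
    def __init__(self, img_channels=1, num_domains=4, base=64, img_size=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(img_channels, base, 4, 2, 1), nn.LeakyReLU(0.01),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.01),
        )
        self.adv_head = nn.Conv2d(base * 2, 1, 3, 1, 1)               # patch real/fake scores
        self.cls_head = nn.Conv2d(base * 2, num_domains, img_size // 4)  # modality logits

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h).flatten(1)

# Usage: a batch of real slices gets a patch score map and per-image logits.
d = MultiDomainDiscriminator()
adv, logits = d(torch.randn(8, 1, 128, 128))
print(adv.shape, logits.shape)  # torch.Size([8, 1, 32, 32]) torch.Size([8, 4])
```

Training would then penalize the generator both adversarially and for producing images the classifier head does not recognize as the requested target modality, which is how one generator/discriminator pair covers all four mappings.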