2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00269

Structure Preserving Compressive Sensing MRI Reconstruction using Generative Adversarial Networks

Abstract: Compressive sensing magnetic resonance imaging (CS-MRI) accelerates the acquisition of MR images by breaking the Nyquist sampling limit. In this work, a novel generative adversarial network (GAN) based framework for CS-MRI reconstruction is proposed. Leveraging a combination of patchGAN discriminator and structural similarity index based loss, our model focuses on preserving high frequency content as well as fine textural details in the reconstructed image. Dense and residual connections have been incorporated…
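The structural similarity index (SSIM) loss mentioned in the abstract rewards reconstructions that match the target's luminance, contrast, and structure rather than only per-pixel intensity. A minimal sketch of such a loss is below; it uses a single global window instead of the usual 11x11 Gaussian-windowed SSIM, and the function names are illustrative, not the authors' actual implementation:

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Global (single-window) structural similarity between two images.

    A simplification of the standard locally windowed SSIM: the mean,
    variance, and covariance statistics are computed over the whole image
    instead of over sliding patches.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def ssim_loss(pred, target):
    # SSIM is 1 for identical images, so 1 - SSIM is a minimizable loss.
    return 1.0 - ssim(pred, target)
```

In a GAN framework this term is typically added to the adversarial loss with a weighting coefficient, steering the generator toward structurally faithful output while the discriminator enforces realism.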

Cited by 29 publications (11 citation statements)
References 28 publications
“…Later, in [108], MSE loss of k-space is added. The authors of [109] proposed to utilize dense connection structure in the bottleneck part of U-Net. The authors of [110] proposed to use adversarial loss function in LSGAN [111].…”
Section: Non-parallel Imaging
confidence: 99%
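The k-space MSE loss referenced in the quote above compares reconstructions in the frequency domain, where MR data is actually acquired. A hedged sketch, assuming single-coil magnitude images and an orthonormal FFT (the function name and optional undersampling mask are illustrative):

```python
import numpy as np

def kspace_mse(pred_img, target_img, mask=None):
    """MSE between the 2-D Fourier transforms (k-space) of two images.

    Both images are mapped to frequency space with an orthonormal FFT and
    compared with mean squared error on the complex coefficients. An
    optional binary mask restricts the comparison to acquired k-space
    locations, as in undersampled CS-MRI.
    """
    pred_k = np.fft.fft2(pred_img, norm="ortho")
    target_k = np.fft.fft2(target_img, norm="ortho")
    diff = pred_k - target_k
    if mask is not None:
        diff = diff * mask  # keep only the sampled frequency locations
    return np.mean(np.abs(diff) ** 2)
```

With `norm="ortho"` and no mask, Parseval's theorem makes this equal to the image-domain MSE; the term becomes distinct from a pixel loss precisely when a sampling mask is applied.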
“…Deora et al [53] developed a new generative adversarial network (GAN) framework for reconstructing CS-MRI. It improves the quality by combining the patch-based GAN discriminator and the structural similarity index loss.…”
Section: Soft Computing
confidence: 99%
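A patch-based (PatchGAN) discriminator, as cited above, outputs a spatial map of real/fake decisions, with each position judging one receptive-field-sized patch of the image. The following sketch computes that output-map size and patch size for a stack of convolutions; the layer configuration shown is the common 70x70 PatchGAN (4x4 kernels, strides 2,2,2,1,1, padding 1), an assumption rather than this paper's exact architecture:

```python
def patchgan_output(h, w, layers=((4, 2), (4, 2), (4, 2), (4, 1), (4, 1))):
    """Output-map size and receptive field of a PatchGAN-style discriminator.

    `layers` is a sequence of (kernel, stride) pairs, each with padding 1.
    Every spatial position of the final map classifies one image patch as
    real or fake, so the adversarial loss is averaged over patches rather
    than reduced to a single scalar per image.
    """
    rf, jump = 1, 1
    for k, s in layers:
        h = (h + 2 - k) // s + 1   # conv output height with padding 1
        w = (w + 2 - k) // s + 1   # conv output width with padding 1
        rf += (k - 1) * jump       # receptive field grows by (k-1)*jump
        jump *= s                  # stride product so far
    return (h, w), rf
```

For a 256x256 input this configuration yields a 30x30 decision map, each entry covering a 70x70 patch, which is what pushes the generator to get local texture right.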
“…Murugesan et al [96] and Emami et al [97] used context dependent/attention-guided GAN which has a feedback loop back to the generator input providing information focusing on local deviations from tissue. Mardani et al [98] and Deora et al [99] used residual skip connections inside each convolutional block of their generator. It is noteworthy that Mardani suggests the discriminator outputs can be used to focus on sensitive anatomies.…”
Section: Introduction
confidence: 99%
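The residual skip connections inside each convolutional block, as described in the quote above, let a block learn a correction F(x) on top of an identity path, out = x + F(x), which eases gradient flow in deep generators. A minimal single-channel sketch (real blocks use multi-channel convolutions and normalization; the helper names are illustrative):

```python
import numpy as np

def conv3x3_same(x, w):
    """Naive 'same'-padded 3x3 convolution on a 2-D array (illustration only)."""
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * padded[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def residual_block(x, w1, w2):
    """Two 3x3 convolutions with an identity shortcut: out = x + F(x).

    With zero weights the block reduces to the identity, which is why
    residual connections make very deep generators trainable.
    """
    h = np.maximum(conv3x3_same(x, w1), 0.0)  # ReLU nonlinearity
    return x + conv3x3_same(h, w2)            # identity shortcut
```

The identity shortcut means each block only has to learn the residual detail (e.g. high-frequency structure lost to undersampling), not the full mapping.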