2022
DOI: 10.1109/jstsp.2022.3170654
PUERT: Probabilistic Under-Sampling and Explicable Reconstruction Network for CS-MRI

Abstract: Compressed Sensing MRI (CS-MRI) aims at reconstructing de-aliased images from sub-Nyquist sampled k-space data to accelerate MR imaging, thus presenting two basic issues, i.e., where to sample and how to reconstruct. To deal with both problems simultaneously, we propose a novel end-to-end Probabilistic Under-sampling and Explicable Reconstruction neTwork, dubbed PUERT, to jointly optimize the sampling pattern and the reconstruction network. Instead of learning a deterministic mask, the proposed sampling subne…

Cited by 23 publications (7 citation statements)
References 76 publications
“…LOUPE [34] and PUERT [22] both assume that each binary sampling element is an independent Bernoulli random variable and learn a probabilistic sampling pattern rather than a deterministic mask. PUERT introduces an effective gradient estimation strategy for the binarization function, and uses a deep unfolding network (DUN) to fully exploit the intrinsic structural features of the mask at each stage, combining deep learning with traditional model-based methods.…”
Section: Related Work
confidence: 99%
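The Bernoulli sampling and gradient estimation described above can be sketched as follows. This is a minimal illustrative example, not the papers' actual implementation: the function names, shapes, and the use of a plain straight-through estimator (passing the gradient through the non-differentiable binarization unchanged) are assumptions for demonstration.

```python
# Sketch: probabilistic under-sampling mask where each k-space location is an
# independent Bernoulli random variable, with a straight-through gradient
# estimator for the binarization step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(probs):
    """Forward pass: draw one Bernoulli sample per k-space location,
    giving a binary under-sampling mask."""
    u = rng.random(probs.shape)
    return (u < probs).astype(np.float64)

def straight_through_grad(grad_mask):
    """Backward pass: binarization has zero gradient almost everywhere,
    so the straight-through estimator approximates d(mask)/d(probs) = 1
    and passes the incoming gradient through unchanged."""
    return grad_mask

probs = np.full((4, 4), 0.5)                       # learnable sampling probabilities
mask = sample_mask(probs)                          # binary sampling pattern
grad = straight_through_grad(np.ones_like(mask))   # gradient w.r.t. probs
```

In a real training loop the probabilities would be updated from this estimated gradient, so the learned pattern concentrates samples where they most reduce reconstruction error.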
“…Finding an effective way to solve the PM step is crucial for CS problems. Since the Prox_R operator behaves like a denoiser, much prior work [22,29] learns this operator with a denoising network to obtain an adaptive regularizer R. Similarly, in this paper, the traditional formula of Equation (16b) is converted into the form of a deep network to build a deep unfolding network (Equation (17)). In this way, the lower-level optimization problem is transformed into a phase-wise reconstruction problem, and each phase adds the residual of the previous phase's data term to the next phase of training.…”
Section: Reconstruction Subnet
confidence: 99%
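One unfolded phase of the scheme described above can be sketched as a gradient step on the data-fidelity term followed by a learned proximal (denoising) step. This is a toy illustration under assumed names and shapes; the hand-written shrinkage stands in for the actual learned denoising network.

```python
# Sketch: one stage of a deep unfolding network where the proximal operator
# Prox_R is replaced by a denoiser. All names and the toy denoiser are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def denoiser(x, strength=0.1):
    """Stand-in for a learned denoising network: mild shrinkage of each
    entry toward the signal mean."""
    return x - strength * (x - x.mean())

def unfolding_stage(x, y, A, step=0.5):
    """One unfolded phase: gradient descent on ||Ax - y||^2, then the
    learned proximal (denoising) step."""
    grad = A.T @ (A @ x - y)   # gradient of the data-fidelity term
    z = x - step * grad        # gradient descent step
    return denoiser(z)         # Prox_R replaced by a denoiser

# Toy problem: identity forward operator, so y is the clean signal.
A = np.eye(8)
x_true = np.linspace(0.0, 1.0, 8)
y = A @ x_true

x = np.zeros(8)
for _ in range(50):            # stack of unfolded phases
    x = unfolding_stage(x, y, A)
```

Iterating the stage drives the data-fidelity residual down while the denoiser enforces the implicit regularizer, which is the core idea of combining traditional optimization with deep learning in a DUN.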