2022
DOI: 10.1109/tmi.2022.3144619

NC-PDNet: A Density-Compensated Unrolled Network for 2D and 3D Non-Cartesian MRI Reconstruction

Abstract: Deep Learning has become a very promising avenue for magnetic resonance image (MRI) reconstruction. In this work, we explore the potential of unrolled networks for non-Cartesian acquisition settings. We design the NC-PDNet (Non-Cartesian Primal Dual Network), the first density-compensated (DCp) unrolled neural network, and validate the need for its key components via an ablation study. Moreover, we conduct some generalizability experiments to test this network in out-of-distribution settings, for example traini…
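The density compensation ("DCp") named in the abstract can be sketched with a toy model. This is not the paper's code: the 1D non-uniform DFT, the sampling pattern, and all variable names below are illustrative assumptions. The idea it demonstrates is that weighting each k-space sample by its local density before applying the adjoint transform turns the adjoint into a usable approximate inverse, which is what a density-compensated unrolled network takes as its input.

```python
import numpy as np

# Toy 1D non-uniform Fourier model: A[j, n] = exp(-2j*pi * k[j] * n).
# A dense matrix stands in for the NUFFT; this is a hypothetical sketch,
# not the NC-PDNet pipeline.
rng = np.random.default_rng(0)
N = 64
n = np.arange(N) - N // 2

# "Radial-like" sampling: sample locations clustered around the k-space centre.
u = np.linspace(-1.0, 1.0, 6 * N)
k = 0.5 * np.sign(u) * u**2            # monotonic, denser near k = 0

A = np.exp(-2j * np.pi * np.outer(k, n))
x_true = rng.standard_normal(N)
y = A @ x_true

# Density compensation: weight each sample by its local k-space spacing, so
# the weighted adjoint A^H diag(d) y acts as a Riemann sum of the inverse
# transform instead of over-counting the densely sampled centre.
d = np.gradient(k)
x_dcp = A.conj().T @ (d * y)
x_plain = A.conj().T @ y / len(k)      # naive unweighted adjoint, for contrast

err_dcp = np.linalg.norm(x_dcp - x_true) / np.linalg.norm(x_true)
err_plain = np.linalg.norm(x_plain - x_true) / np.linalg.norm(x_true)
```

With the centre-heavy sampling above, the unweighted adjoint over-weights low frequencies and blurs the result, while the density-compensated adjoint stays much closer to the true signal.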



Cited by 35 publications (34 citation statements)
References 60 publications
“…27 While this work did not compare the proposed DL reconstruction without data consistency against an unrolled-based network with data consistency, efficient GPU implementation of NUFFT would decrease the reconstruction time and facilitate implementation of the latter in clinical practice. 18 FIGURE 6 (a) Spatial performance comparison of the reference GRASP (first row), with GRASPnet-2D (second row), GRASPnet-3D (third row), and GRASPnet-2D + time (fourth row) for four different slices and a representative contrast phase in a patient with a liver lesion (the same patient as in Figure 5). (b) SSIM and PSNR for the three DL reconstruction methods.…”
Section: Discussion
confidence: 99%
“…Second, while signal intensity during acquisition remains unchanged in the dynamic imaging of motion (cardiac cine, respiratory motion), it varies in DCE‐MRI because of contrast uptake. Third, the unrolled‐loop networks still need to apply data consistency in each layer, which consists of two nonuniform fast Fourier transform (NUFFT) 17–19 operations per coil and time point, and thus reconstruction time would remain long.…”
Section: Introduction
confidence: 99%
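The statement above notes that unrolled networks must apply data consistency in every layer, at a cost of two NUFFT operations per coil and time point. A minimal sketch of why, using a dense toy DFT matrix in place of a real NUFFT library (all names and sizes here are illustrative assumptions): the gradient of the per-coil least-squares data term contains one forward and one adjoint transform per coil.

```python
import numpy as np

# Data consistency for min_x 0.5 * sum_c ||F S_c x - y_c||^2 has gradient
#     grad = sum_c S_c^H F^H (F S_c x - y_c),
# i.e. one forward (F) and one adjoint (F^H) transform per coil, per
# unrolled layer. Toy sketch: a dense non-uniform DFT stands in for the NUFFT.
rng = np.random.default_rng(1)
N, J, C = 32, 48, 4                       # image size, k-space samples, coils
n = np.arange(N) - N // 2
k = rng.uniform(-0.5, 0.5, J)
F = np.exp(-2j * np.pi * np.outer(k, n))  # "NUFFT" (dense toy version)
S = rng.standard_normal((C, N)) + 1j * rng.standard_normal((C, N))  # coil maps

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = np.stack([F @ (S[c] * x) for c in range(C)])  # measured k-space per coil

def dc_gradient(x, y, F, S):
    """One data-consistency gradient: 2 transforms (F and F^H) per coil."""
    grad = np.zeros_like(x)
    for c in range(S.shape[0]):
        resid = F @ (S[c] * x) - y[c]                 # forward (transform #1)
        grad += np.conj(S[c]) * (F.conj().T @ resid)  # adjoint (transform #2)
    return grad

# At the true image the residual vanishes, so the gradient is zero.
g = dc_gradient(x, y, F, S)
```

Counting the loop body makes the cost concrete: each layer of an unrolled network repeats this 2·C-transform gradient, which is why efficient GPU NUFFT implementations matter so much for the reconstruction times discussed above.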
“…MC-PDNet was evaluated retrospectively in the Cartesian setting on 4-fold under-sampled multi-contrast data. However future prospect will involve the extension to non-Cartesian acquisition by mimicking the NC-PDNet [19] and possibly the validation on larger public databases (e.g. OASIS-3).…”
Section: Conclusion and Discussion
confidence: 99%
“…In this work we introduce the MC-PDNet architecture that accumulates imaging contrasts as a supplementary channel dimension which allows us to share weights in the CNN across contrasts and limit the memory footprint. Moreover, for this proof of concept the focus will only pertain to single-coil 2D imaging and Cartesian data even though it could be easily extended to 3D imaging and for non-Cartesian readout following the ideas proposed in the NC-PDNet extension [19]. Finally, we present the results of retrospective studies performed on 4-fold under-sampled magnitude-only data extracted from the above described in-house database that was constructed for analyzing the effects of aging on the brain.…”
Section: Introduction
confidence: 99%