2021
DOI: 10.1002/mrm.28834

Real‐time deep artifact suppression using recurrent U‐Nets for low‐latency cardiac MRI

Open access: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Cited by 21 publications (17 citation statements)
References 29 publications
“…It was able to remove artifacts and segment images without user intervention and with low enough latency to provide almost real‐time monitoring. Latency could be further reduced by including (1) initialization of the pipeline before the start of acquisition to reduce the initial latency, (2) parallelization of gridding and deep artifact suppression for higher frame rate, and (3) using a memory‐based network to reconstruct the latest frame rather than blocks, while still using temporal redundancies 30 …”
Section: Discussion (mentioning)
confidence: 99%
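The latency ideas in the excerpt above (warming up the pipeline before acquisition and overlapping gridding with deep artifact suppression) map naturally onto a producer-consumer pattern. Below is a minimal Python sketch of that pattern only; grid_frame and suppress_artifacts are hypothetical stand-ins with sleep-based costs, not the authors' reconstruction or network code.

```python
import queue
import threading
import time

import numpy as np

def grid_frame(raw_spokes):
    """Hypothetical gridding stage: map a block of radial spokes onto a
    Cartesian image. Stubbed with a fixed cost for illustration."""
    time.sleep(0.01)                       # stand-in for NUFFT/gridding time
    return np.zeros((192, 192))

def suppress_artifacts(frame):
    """Hypothetical deep-artifact-suppression stage: network inference on
    one frame. Stubbed with a fixed cost for illustration."""
    time.sleep(0.01)                       # stand-in for model inference time
    return frame

frame_queue = queue.Queue(maxsize=4)       # gridded frames awaiting inference

def gridding_worker(n_frames):
    """Producer: grid frames as spokes arrive, overlapping with inference."""
    for _ in range(n_frames):
        frame_queue.put(grid_frame(raw_spokes=None))
    frame_queue.put(None)                  # sentinel: acquisition finished

def suppression_worker():
    """Consumer: suppress artifacts on one frame while the next is gridded."""
    while (frame := frame_queue.get()) is not None:
        suppress_artifacts(frame)          # display/stream the result here

# (1) Warm up the model before acquisition starts to hide first-frame latency.
suppress_artifacts(np.zeros((192, 192)))

# (2) Run gridding and suppression concurrently for a higher frame rate.
producer = threading.Thread(target=gridding_worker, args=(50,))
consumer = threading.Thread(target=suppression_worker)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

With the two stages overlapped like this, the steady-state frame period is set by the slower stage rather than by the sum of both.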
“…Küstner et al proposed the CINENet network for 3D+time cine CMR reconstruction and showed that it outperforms iterative reconstruction in terms of visual image quality and contrast [10]. These works clearly show the benefits of the proposed architectures and are now starting to be deployed into MR scanner software [15].…”
Section: A. Acquisition and Reconstruction (mentioning)
confidence: 96%
“…To simulate the undersampled radial k-space acquisition, the images were organized into 3D matrices. The resulting matrix was Fourier transformed along the spatial domains and each (kx–ky) space was masked by a radial pattern with a TR of 2.6 ms. For each sample, the number of projections per frame (t) was equal to P ∈ {7, 15, 23, 30, 38, 46, 53, 61, 69, 77, 84, 92, 100, 107, 115, 123, 130, 138, 146, 153, 161, 169, 176, 184, 192, 200, 207, 215, 223, 230}, corresponding to thirty different sampling rates. Accordingly, the corresponding acceleration factors R with respect to the radial fully-sampled data were {41.96, 19.58, 12.77, 9.79, 7.73, 6.39, 5.54, 4.82, 4.26, 3.81, 3.5, 3.19, 2.94, 2.74, 2.55, 2.39, 2.26, 2.13, 2.01, 1.92, 1.83, 1.74, 1.67, 1.60, 1.53, 1.47, 1.42, 1.37, 1.32, 1.28}.…”
Section: Simulation of Radial Acquisition Pattern (mentioning)
confidence: 99%
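The excerpt describes masking Cartesian k-space with a radial pattern at varying numbers of projections per frame. The NumPy sketch below illustrates that kind of retrospective undersampling; the 192 x 192 matrix size, uniform spoke angles, and the π/2·N definition of the fully sampled radial reference are assumptions for illustration, not values taken from the excerpt, so the printed R only roughly matches the factors listed above.

```python
import numpy as np

def radial_mask(n, num_spokes):
    """Binary (ky, kx) mask approximating `num_spokes` uniformly spaced
    radial projections rasterized onto an n x n Cartesian grid."""
    mask = np.zeros((n, n), dtype=bool)
    center = (n - 1) / 2.0
    radius = n / 2.0
    t = np.linspace(-radius, radius, 2 * n)          # samples along each spoke
    for angle in np.arange(num_spokes) * np.pi / num_spokes:
        ky = np.clip(np.round(center + t * np.sin(angle)).astype(int), 0, n - 1)
        kx = np.clip(np.round(center + t * np.cos(angle)).astype(int), 0, n - 1)
        mask[ky, kx] = True
    return mask

def simulate_undersampling(image, num_spokes):
    """Fourier transform a 2D frame, keep only the masked radial samples,
    and return the artifact-corrupted (zero-filled) reconstruction."""
    n = image.shape[0]
    kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
    mask = radial_mask(n, num_spokes)
    aliased = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
    return aliased, mask

# Example: one frame, assumed 192 x 192 matrix, 15 projections per frame.
frame = np.random.rand(192, 192)                      # stand-in for a cine frame
aliased, mask = simulate_undersampling(frame, 15)

# Acceleration factor relative to a fully sampled radial acquisition
# (Nyquist needs roughly pi/2 * matrix-size spokes).
n_full = int(np.ceil(np.pi / 2 * 192))
print("R ≈", n_full / 15)
```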
“…It can be found from the leaderboard that most state-of-the-art methods (e.g., top-10 methods) employ U-Net 39 or its variants. 27,40–42 Here, U-Net is the classical encoder–decoder form of a convolutional neural network. Despite great success, these methods are not optimal and have 4 problems: (1) classical U-Net is originally designed for data in the image domain, and directly applying the image-oriented U-Net in k-space data is not optimal for extracting features in the k-space domain 43,44 ; (2) classical U-Net is considered a heavyweight method (a large number of parameters) and hence is inefficient when cascaded many times for producing high-quality reconstruction 45 ;…”
Section: Introduction (mentioning)
confidence: 99%
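Since the excerpt hinges on U-Net being the classical encoder–decoder convolutional network, the PyTorch sketch below shows that encoder–decoder-with-skip-connections structure in its smallest form; the depth, channel widths, and single-channel 2D input are illustrative assumptions, not the architecture of the cited works.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Encoder-decoder with skip connections: downsample, bottleneck, upsample."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.enc1 = conv_block(channels, width)
        self.enc2 = conv_block(width, 2 * width)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * width, 4 * width)
        self.up2 = nn.ConvTranspose2d(4 * width, 2 * width, 2, stride=2)
        self.dec2 = conv_block(4 * width, 2 * width)
        self.up1 = nn.ConvTranspose2d(2 * width, width, 2, stride=2)
        self.dec1 = conv_block(2 * width, width)
        self.out = nn.Conv2d(width, channels, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full-resolution features
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        b = self.bottleneck(self.pool(e2))    # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# Example: de-alias a single-channel 192 x 192 frame.
net = TinyUNet()
y = net(torch.randn(1, 1, 192, 192))
print(y.shape)  # torch.Size([1, 1, 192, 192])
```

Deeper variants repeat the pool/conv and upsample/concatenate stages, which multiplies the parameter count; that growth is the "heavyweight when cascaded" concern raised in the excerpt.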