2022
DOI: 10.48550/arxiv.2201.10776
Preprint

DSFormer: A Dual-domain Self-supervised Transformer for Accelerated Multi-contrast MRI Reconstruction

Abstract: Multi-contrast MRI (MC-MRI) captures multiple complementary imaging modalities to aid in radiological decision-making. Given the need for lowering the time cost of multiple acquisitions, current deep accelerated MRI reconstruction networks focus on exploiting the redundancy between multiple contrasts. However, existing works are largely supervised with paired data and/or prohibitively expensive fully-sampled MRI sequences. Further, reconstruction networks typically rely on convolutional architectures which are…

Cited by 4 publications (4 citation statements) | References 26 publications
“…Similarly, SwinIR [53] based on residual Swin Transformer blocks [54] without any downsampling operation also showed significant advantages for image restoration tasks. Combining these networks with dual domain learning has also shown to yield superior medical image reconstruction performance [30], [44], [55]- [57]. Deploying these networks in our FTL could potentially further improve our denoising performance and will be an important direction of our future studies.…”
Section: Discussion
confidence: 97%
“…Applications of the transformer to image denoising, motion deblurring, and defocus deblurring have been reported ( 65 ). In the area of image reconstruction alone, transformers have demonstrated superior performance compared to conventional convolution neural networks ( 66 , 67 ).…”
Section: Deep Learning For Acceleration
confidence: 99%
“…Adaptions of the reconstruction techniques described above can involve, as examples, training on both the image domain and k-space or getting the neural network to uncover the optimal undersampling pattern in k-space. The first method, also known as dual-domain reconstruction, makes the reasonable and logical assumption that providing both object and frequency domain data for training should improve reconstruction quality ( 67 , 68 ). While this is, indeed, the finding, it also comes to no surprise that the improvement in reconstruction quality is modest, since the information contained in k-space is identical, no more and no less, than the information contained in the image.…”
Section: Deep Learning For Acceleration
confidence: 99%
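The dual-domain idea quoted above — supervising a reconstruction in both the image domain and k-space — can be written as a combined objective. The sketch below is an illustration only, not the loss used in DSFormer or the citing works; the function name, L1 penalties, and the weight `lam` are assumptions:

```python
import numpy as np

def dual_domain_loss(x_rec, x_ref, k_meas, mask, lam=0.5):
    """Toy dual-domain objective: an image-domain L1 term plus a
    k-space L1 term restricted to the sampled locations (mask == 1)."""
    img_term = np.mean(np.abs(x_rec - x_ref))            # object-domain fidelity
    k_rec = np.fft.fft2(x_rec)                           # reconstruction's k-space
    k_term = np.mean(np.abs(mask * (k_rec - k_meas)))    # frequency-domain fidelity
    return img_term + lam * k_term

# Sanity check: a perfect reconstruction incurs zero loss in both domains.
x = np.random.rand(8, 8)
k = np.fft.fft2(x)
m = (np.random.rand(8, 8) > 0.5).astype(float)
print(dual_domain_loss(x, x, k, m))  # → 0.0
```

As the quoted statement notes, both terms describe the same underlying information, which is consistent with the observation that the gain from adding the k-space term is modest.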
“…Later, imaging model has been integrated into the learning pipeline, and model-based learning has achieved the state-of-the-art performance [1,9,13,18,19,21,22,23,26,27,30]. More recently, transformers have been integrated into the CNN-based networks for MRI undersampled reconstruction [4,6,16,29]. These networks can reconstruct MR images with a high fidelity.…”
Section: Introduction
confidence: 99%
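The model-based learning mentioned in this statement typically interleaves a network step with a data-consistency step that enforces agreement with the measured k-space samples. A minimal sketch of hard data consistency, assuming a binary sampling mask (the function name and masking convention are illustrative, not taken from the cited papers):

```python
import numpy as np

def data_consistency(x_net, k_meas, mask):
    """Hard data consistency: keep the network's k-space estimate at
    unsampled locations, but overwrite sampled locations with the
    actual measurements before returning to image space."""
    k_net = np.fft.fft2(x_net)
    k_dc = np.where(mask.astype(bool), k_meas, k_net)
    return np.fft.ifft2(k_dc)

# At sampled locations, the output's k-space must match the measurements.
rng = np.random.default_rng(0)
x_true = rng.random((8, 8))
mask = rng.random((8, 8)) > 0.6
k_meas = mask * np.fft.fft2(x_true)
x_out = data_consistency(rng.random((8, 8)), k_meas, mask)
err = np.abs(np.fft.fft2(x_out)[mask] - k_meas[mask]).max()
print(err < 1e-9)  # → True
```

In unrolled model-based networks this projection is applied after each learned denoising block, which is one reason such methods reconstruct with high fidelity to the acquired data.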