2020
DOI: 10.1002/mrm.28378
Self‐supervised learning of physics‐guided reconstruction neural networks without fully sampled reference data

Abstract: Purpose: To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. Methods: Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets, one of which is used in the data-consistency (DC) units of the unrolled network, while the other is used to define the loss for training. The proposed training without fully sampled data is compared with full…
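The measurement split described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the set names (`theta` for the DC units, `lam` for the loss) follow the abstract's description, while the toy 1D mask and the split ratio are assumptions for the example.

```python
# Hypothetical sketch of the SSDU measurement partition: acquired k-space
# indices are split into two disjoint sets, one feeding the data-consistency
# (DC) units of the unrolled network and one defining the training loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D undersampling mask: indices of acquired k-space samples.
acquired = np.flatnonzero(rng.random(256) < 0.3)

# Partition into two disjoint sets (split ratio is illustrative).
rng.shuffle(acquired)
split = int(0.6 * len(acquired))
theta, lam = acquired[:split], acquired[split:]   # DC set, loss set

# The sets are disjoint and together cover all acquired samples.
assert np.intersect1d(theta, lam).size == 0
assert np.union1d(theta, lam).size == acquired.size
```

Because the loss is computed only on samples the DC units never see, the network cannot trivially satisfy the training objective by copying its input, which is what makes training without fully sampled references possible.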


Cited by 199 publications (269 citation statements)
References 75 publications (266 reference statements)
“…A key obstacle for the application of such techniques to large-scale 3D image reconstruction problems is their computational cost, as they involve training a new network for every 2D slice of the reconstruction. For inverse problems, approaches that rely on splitting the measurement data have recently been proposed for magnetic resonance imaging (MRI) [17], [31] and Cryo-transmission electron microscopy (Cryo-EM) [16] showing image quality improvement with respect to denoising applied on the reconstructed image. While these results are highly promising, a solid theoretical underpinning that allows analysis and insights into the interplay between the underlying noise model of the inverse problem and the obtained solution is currently lacking.…”
Section: Introduction (mentioning, confidence: 99%)
“…Higher acceleration rates in MRI have recently been achieved with physics-based deep learning methods that use algorithm unrolling to solve a regularized least-squares problem with a learned regularizer. 50–56 The iterative process for solving ROCK-SPIRiT naturally lends itself to such algorithm unrolling; thus, these approaches can be used to further improve regularization quality without manual parameter tuning, especially for higher acceleration rates. However, this was not explored, as we only have a limited number of such cine volumes.…”
Section: Discussion (mentioning, confidence: 99%)
“…Recent works have also proposed the new concept of self-supervised learning for MRI reconstruction. 10,11 An early study has shown that a denoising deep learning network can be successfully trained using pairs of noisy images. 7 Self-supervised learning relies on a hypothesis that image noise and artifacts are typically incoherent in training data pairs; thus, minimizing a loss between them readily regularizes the learning to capture coherent image content.…”
Section: Figure (mentioning, confidence: 99%)
“…More recently, several works have investigated unsupervised or self-supervised learning for the reconstruction of undersampled static MR images. 7–12 Although the specific implementations of these works vary from one to the other, they all train CNNs on undersampled data sets directly without fully sampled references, and inherent MR physical models (eg, Fourier encoding and coil sensitivity encoding) are incorporated as training regularizations. The results in these works have shown that with proper design of network training, unsupervised or self-supervised learning can achieve similar reconstruction performance compared with supervised learning.…”
Section: Introduction (mentioning, confidence: 99%)