2020
DOI: 10.1007/978-3-030-58621-8_28
Self-supervised Bayesian Deep Learning for Image Recovery with Applications to Compressive Sensing

Cited by 18 publications (9 citation statements)
References 36 publications
“…Several studies have investigated self-supervised learning for MRI reconstruction. [197][198][199][200][201][202][203][204][205][206][207][208][209][210][211][212] For instance, Yaman et al 197 introduced a self-supervised approach (SSDU) in which the acquired undersampled data indices are divided into a set of k-space positions used in the network's DC layer during training, and a set of k-space positions used within the loss function. This is a classic work in self-supervised MRI reconstruction, offering valuable insights for subsequent self-supervised learning methods.…”
Section: Unsupervised DL for Fast MRI (mentioning)
confidence: 99%
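The SSDU scheme described in this excerpt splits the acquired k-space samples into two disjoint sets, one enforced in the data-consistency layer and one reserved for the loss. Below is a minimal NumPy sketch of such a split; it illustrates the idea only and is not Yaman et al.'s implementation, and the function name `split_kspace_indices`, the `loss_fraction` default, and the uniform random selection are assumptions.

```python
import numpy as np

def split_kspace_indices(mask, loss_fraction=0.4, rng=None):
    """Partition an undersampling mask (1 = acquired) into a DC mask and a loss mask."""
    rng = np.random.default_rng() if rng is None else rng
    acquired = np.flatnonzero(mask)                              # acquired k-space positions
    n_loss = int(loss_fraction * acquired.size)
    loss_idx = rng.choice(acquired, size=n_loss, replace=False)  # positions held out for the loss
    loss_mask = np.zeros(mask.shape, dtype=bool)
    loss_mask.flat[loss_idx] = True
    dc_mask = np.asarray(mask, dtype=bool) & ~loss_mask          # positions fed to the DC layer
    return dc_mask, loss_mask
```

During training, data consistency is enforced only on `dc_mask`, while the self-supervised loss compares the network's k-space output against the acquired samples selected by `loss_mask`.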
“…Several studies have investigated self‐supervised learning for MRI reconstruction [197–212]. For instance, Yaman et al. [197] introduced a self‐supervised approach (SSDU) in which the acquired undersampled data indices are divided into a set of k‐space positions used in the network's DC layer during training, and a set of k‐space positions used within the loss function.…”
Section: Paradigm Shift and Applications for MRI Reconstruction (mentioning)
confidence: 99%
“…A closely related line of work known as plug-and-play (P&P) [49] used denoisers as regularizers for solving inverse problems. A number of recent extensions have used this concept to develop MAP solutions for inverse problems [50][51][52][53][54][55][56][57][58][59][60][61].…”
Section: Related Work (mentioning)
confidence: 99%
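As a concrete illustration of the plug-and-play idea this excerpt refers to, the sketch below runs an ISTA-style iteration in which a generic denoiser stands in for the proximal operator of a regularizer. It is a schematic under assumed interfaces: `A`/`At` are a forward operator and its adjoint, and `denoise` is any off-the-shelf denoiser; none of these names come from the cited works.

```python
def pnp_ista(y, A, At, denoise, step=1.0, n_iters=50, x0=None):
    """Plug-and-play ISTA sketch: gradient step on the data term, denoiser in place of the prox."""
    x = At(y) if x0 is None else x0
    for _ in range(n_iters):
        grad = At(A(x) - y)            # gradient of 0.5 * ||A x - y||^2
        x = denoise(x - step * grad)   # the denoiser acts as an implicit regularizer
    return x
```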
“…While most of the models we talked about are within the scope of supervised learning, unsupervised (and self-supervised) learning is another prevailing approach for image reconstruction [10,63,74,75,31,7,37,50,57,53,54]. In comparison with the supervised learning-based approach, unsupervised methods are more reliant on the regularization term (i.e., r(•) in (2.1)) due to the lack of labels.…”
Section: Improving Image Reconstruction with Deep Learning (mentioning)
confidence: 99%
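The excerpt points to a regularizer r(·) in the citing paper's equation (2.1), which is not reproduced on this page. Purely as an assumption about the standard form such objectives take, a typical regularized reconstruction problem reads:

```latex
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 \;+\; \lambda\, r(x)
```

With no ground-truth labels available, unsupervised and self-supervised methods lean more heavily on r(x) (and its weight λ) to constrain the reconstruction, which is the reliance the excerpt describes.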