2021
DOI: 10.48550/arxiv.2107.11191
Preprint

Regularising Inverse Problems with Generative Machine Learning Models

Abstract: Deep neural network approaches to inverse imaging problems have produced impressive results in the last few years. In this paper, we consider the use of generative models in a variational regularisation approach to inverse problems. The considered regularisers penalise images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family generative regularisers. The success of generative regularisers depends on the quality of the generati…
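The abstract describes regularisers that penalise images far from the range of a generative model. One common concrete realisation of this idea (in the spirit of Bora et al 2017) is to optimise directly over the generator's latent space, so the reconstruction is constrained to the generator's range. The sketch below uses a toy tanh "generator" as a stand-in for a trained network; all names, sizes, and step sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 50, 20                            # latent dim, image dim, number of measurements
W = rng.standard_normal((n, k)) / np.sqrt(n)   # toy "generator" weights
G = lambda z: np.tanh(W @ z)                   # G: R^k -> R^n, stands in for a trained decoder
A = rng.standard_normal((m, n)) / np.sqrt(m)   # linear forward operator (e.g. a sensing matrix)

z_true = rng.standard_normal(k)
y = A @ G(z_true)                              # noiseless measurements of an in-range image

# Reconstruct by restricting the image to the generator's range:
#     min_z ||A G(z) - y||^2, solved with plain gradient descent.
z = np.zeros(k)
res0 = np.linalg.norm(A @ G(z) - y)            # residual at the initial point
for _ in range(5000):
    x = G(z)
    r = A @ x - y
    grad = 2 * W.T @ ((1 - x ** 2) * (A.T @ r))  # chain rule through tanh(Wz)
    z -= 0.01 * grad
```

The reconstructed image is then G(z); the data residual shrinks as the latent iterate moves toward a latent code consistent with the measurements.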

Cited by 5 publications (8 citation statements)
References 60 publications
“…For a 3D imaging problem, one could for example attribute equal importance to all spatial directions or opt for a construction as in (30), if for example the z-direction has a different resolution than the x- and y-directions. Moreover, for complex-valued images, it seems intuitive to share the same regularization map across the real and the imaginary parts of the images.…”
Section: Obtaining the Regularization Parameter-Map via a CNN (mentioning)
confidence: 99%
“…for some network u_Θ with trainable parameters Θ and input s [30,44,47,78], by replacing proximal operators in classical iterative schemes with learned NN denoisers (in a "plug-and-play" fashion) [65,74], or by using learned iterative schemes [2,3,35,48]; see also the review papers [7,64,67]. Since one of our choices for the iterative scheme (8) will be the Primal-Dual Hybrid Gradient method (PDHG) of Chambolle and Pock [17], our approach is related to the Learned Primal-Dual method [3], where the proximal operators in the primal and dual steps of PDHG are fully substituted by learnable networks.…”
Section: Introduction (mentioning)
confidence: 99%
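The excerpt above references the Primal-Dual Hybrid Gradient (PDHG) method of Chambolle and Pock, whose proximal steps the Learned Primal-Dual method replaces with networks. For concreteness, here is a minimal NumPy sketch of plain (unlearned) PDHG applied to 1D total-variation denoising; the function name and problem instance are illustrative, not from the cited works.

```python
import numpy as np

def pdhg_tv_denoise(b, lam, n_iter=500):
    """PDHG (Chambolle-Pock) for min_x 0.5*||x - b||^2 + lam*||D x||_1,
    where D is the 1D forward-difference operator."""
    n = len(b)
    D = lambda x: np.diff(x)                                         # K in the saddle-point form
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))   # adjoint of D
    tau = sigma = 0.4                          # step sizes: tau*sigma*||D||^2 <= 0.64 < 1
    x, x_bar, y = b.copy(), b.copy(), np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is projection onto [-lam, lam]
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        # primal step: prox of 0.5*||. - b||^2
        x_new = (x - tau * Dt(y) + tau * b) / (1 + tau)
        x_bar = 2 * x_new - x                  # over-relaxation / extrapolation
        x = x_new
    return x
```

Applied to a noisy step signal, the iterates flatten the oscillations while preserving the jump, which is the behaviour a learned variant would accelerate or improve.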
“…We model the images using a generative model, incorporating it as part of a regulariser for the inverse problem reconstruction. There has been previous work in this area (Bora et al 2017, Dhar et al 2018, Tripathi et al 2018, Duff et al 2021, Habring and Holler 2022), and we extend it with the addition of a structured image covariance network. We train our generative model without paired training data and without knowledge of the forward problem, and point to the interesting works of Zhang et al (2021) and Jalal et al (2021) for examples of these settings.…”
Section: Contributions (mentioning)
confidence: 99%
“…where the similarity measure, taking values in [0, ∞], ensures that the reconstructed image matches the data, and the regularization term, also mapping into [0, ∞], is small if the image satisfies some desired property. We extend recent work, including (Bora et al 2017, Dhar et al 2018, Tripathi et al 2018, Duff et al 2021, Habring and Holler 2022), that considers the use of a generative machine learning model as part of the regularizer, called generative regularizers. A latent generative model is designed to take a sample from a distribution in a low-dimensional latent space and generate data similar to the training distribution.…”
Section: Introduction (mentioning)
confidence: 99%
“…Also, diffusion models [39,40,76,77] have shown great generative modelling capabilities and have been used as priors for inverse problems. Moreover, other generative models, such as GANs [4,26,60] or VAEs [44], have been used as regularizers; see the recent review [20] and references therein. However, even though these methods allow unsupervised reconstruction, their training is often computationally costly and a large number of training images is required.…”
Section: Introduction (mentioning)
confidence: 99%