2021
DOI: 10.48550/arxiv.2104.09895
Preprint

Posterior Sampling for Image Restoration using Explicit Patch Priors

Roy Friedman, Yair Weiss

Abstract: Almost all existing methods for image restoration are based on optimizing the mean squared error (MSE), even though it is known that the best estimate in terms of MSE may yield a highly atypical image due to the fact that there are many plausible restorations for a given noisy image. In this paper, we show how to combine explicit priors on patches of natural images in order to sample from the posterior probability of a full image given a degraded image. We prove that our algorithm generates correct samples from…

Cited by 3 publications (5 citation statements)
References 30 publications
“…When is posterior sampling optimal? Many recent image restoration methods attempt to produce diverse, high perceptual quality reconstructions by sampling from the posterior distribution $p_{X|Y}$ [7,18,10]. As discussed in [4], the posterior sampling estimator attains a perception index of 0 (namely $W_2(p_X, p_{\hat{X}}) = 0$) and distortion $2D^*$.…”
Section: The Wasserstein and Gelbrich Distances
confidence: 99%
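The distortion $2D^*$ quoted above follows from a short, standard argument in the perception-distortion literature the excerpt cites. Here is a minimal derivation sketch, using our own notation (not defined in the excerpt): $X^* = \mathbb{E}[X|Y]$ is the MMSE estimator and $D^* = \mathbb{E}\|X - X^*\|^2$ its distortion.

```latex
% Draw \hat{X} ~ p_{X|Y}, independently of X given Y. Conditioned on Y, both
% X and \hat{X} follow the same distribution with conditional mean X^*, so
% the cross term in the expansion below vanishes:
\begin{align*}
\mathbb{E}\|X-\hat{X}\|^2
  &= \mathbb{E}\|X - X^*\|^2 + \mathbb{E}\|X^* - \hat{X}\|^2
   + 2\,\mathbb{E}\big\langle X - X^*,\; X^* - \hat{X}\big\rangle \\
  &= D^* + D^* + 0 \;=\; 2D^*.
\end{align*}
% Meanwhile p_{\hat{X}} = p_X by construction, hence W_2(p_X, p_{\hat{X}}) = 0.
```

So posterior sampling pays exactly twice the minimum achievable MSE while attaining a perfect perception index.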
“…3.1). Further simplification arises when $\gamma, \mu$ are centered non-singular Gaussian measures, in which case $T_{\gamma \to \mu}$ is the linear and symmetric transformation (7). Then, $\gamma_t$ is a Gaussian measure with covariance $\Sigma_{\gamma_t} = T_t \Sigma_\gamma T_t$, where $T_t \triangleq I + t(T_{\gamma \to \mu} - I)$.…”
Section: A Geometric Perspective on the Distortion-Perception Tradeoff
confidence: 99%
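For intuition about this construction, the following is a small numerical sketch (our illustration, not code from the cited work): the closed-form linear OT map between centered non-singular Gaussians, $T_{\gamma\to\mu} = \Sigma_\gamma^{-1/2}\big(\Sigma_\gamma^{1/2}\Sigma_\mu\Sigma_\gamma^{1/2}\big)^{1/2}\Sigma_\gamma^{-1/2}$, plugged into the interpolated covariance $\Sigma_{\gamma_t} = T_t \Sigma_\gamma T_t$. The example covariance matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_transport_map(sigma_gamma, sigma_mu):
    """Linear, symmetric OT map T sending N(0, sigma_gamma) to N(0, sigma_mu)."""
    root = sqrtm(sigma_gamma)                # Sigma_gamma^{1/2}
    inv_root = np.linalg.inv(root)           # Sigma_gamma^{-1/2}
    middle = sqrtm(root @ sigma_mu @ root)   # (Sigma^{1/2} Sigma_mu Sigma^{1/2})^{1/2}
    return np.real(inv_root @ middle @ inv_root)

def interpolated_covariance(sigma_gamma, sigma_mu, t):
    """Covariance of gamma_t along the interpolation T_t = I + t*(T - I)."""
    d = sigma_gamma.shape[0]
    T_t = np.eye(d) + t * (gaussian_transport_map(sigma_gamma, sigma_mu) - np.eye(d))
    return T_t @ sigma_gamma @ T_t

# Sanity check: t=0 recovers sigma_gamma and t=1 recovers sigma_mu.
sigma_gamma = np.array([[2.0, 0.5], [0.5, 1.0]])
sigma_mu = np.array([[1.0, -0.3], [-0.3, 3.0]])
assert np.allclose(interpolated_covariance(sigma_gamma, sigma_mu, 0.0), sigma_gamma, atol=1e-6)
assert np.allclose(interpolated_covariance(sigma_gamma, sigma_mu, 1.0), sigma_mu, atol=1e-6)
```

The sanity checks mirror the displacement-interpolation reading of the excerpt: $\gamma_0 = \gamma$ and $\gamma_1 = \mu$.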
“…In particular, [93] proposed the negative log-likelihood of all patches of an image as a regularizer, where the underlying patch distribution was assumed to follow a Gaussian mixture model (GMM) whose parameters were learned from a few clean images. This method is still competitive with many approaches based on deep learning, and several extensions have been suggested recently [22,61,72]. However, even though GMMs can approximate any probability density function if the number of components is large enough, they suffer from limited flexibility for a fixed number of components; see [23] and the references therein.…”
Section: Introduction
confidence: 99%
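This excerpt and the next both describe the same patch-based regularizer: the negative log-likelihood of all overlapping patches of an image under a GMM learned from clean images. A minimal sketch of that idea (our illustration with an assumed 8x8 patch size and scikit-learn's GMM, not the cited implementation):

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.mixture import GaussianMixture

PATCH_SIZE = 8  # assumed; the cited works use small fixed-size patches

def fit_patch_gmm(clean_images, n_components=20):
    """Fit a GMM to mean-subtracted patches drawn from a few clean images."""
    patches = np.concatenate([
        extract_patches_2d(img, (PATCH_SIZE, PATCH_SIZE)) for img in clean_images
    ]).reshape(-1, PATCH_SIZE * PATCH_SIZE).astype(np.float64)
    patches -= patches.mean(axis=1, keepdims=True)  # remove per-patch DC component
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(patches)

def patch_nll_regularizer(image, gmm):
    """Negative log-likelihood of all overlapping patches of `image` under `gmm`."""
    patches = extract_patches_2d(image, (PATCH_SIZE, PATCH_SIZE))
    patches = patches.reshape(-1, PATCH_SIZE * PATCH_SIZE).astype(np.float64)
    patches -= patches.mean(axis=1, keepdims=True)
    return -gmm.score_samples(patches).sum()  # lower means more "natural" patches
```

Minimizing a data-fidelity term plus this regularizer recovers the restoration setup the excerpts describe; the limited flexibility of a fixed-size GMM is the motivation they give for more expressive density models.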
“…In particular, the authors of [71] proposed the negative log-likelihood of all patches of an image as a regularizer, where the underlying patch distribution was assumed to follow a Gaussian mixture model (GMM) whose parameters were learned from a few clean images. The resulting method is still competitive with many approaches based on deep learning, and several improvements and extensions have been suggested recently [16,49,57]. However, even though GMMs can approximate any probability density function if the number of components is large enough, they suffer from limited flexibility for a fixed number of components; see [17] and the references therein.…”
Section: Introduction
confidence: 99%