2018
DOI: 10.1137/17m1135694

High-Dimensional Mixture Models for Unsupervised Image Denoising (HDMI)

Abstract: This work addresses the problem of patch-based image denoising through the unsupervised learning of a probabilistic high-dimensional mixture model on the noisy patches. The model, named hereafter HDMI, proposes a full modeling of the process that is supposed to have generated the noisy patches. To overcome the potential estimation problems due to the high dimension of the patches, the HDMI model adopts a parsimonious modeling which assumes that the data live in group-specific subspaces of low dimensionalities…
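The abstract's approach can be pictured with a minimal sketch, under stated assumptions: the noise standard deviation sigma is known, scikit-learn's GaussianMixture stands in for the paper's dedicated estimator, and a single fixed subspace rank replaces the group-specific dimensionalities that HDMI actually estimates. Function names, the number of components, and the rank are illustrative, not values from the paper.

```python
# Hedged sketch (not the authors' exact HDMI estimator): fit a Gaussian mixture
# directly on noisy patches, approximate each cluster's clean-signal covariance
# by a low-dimensional eigen-subspace, and denoise with a posterior-weighted
# Wiener filter. `sigma` is the known noise standard deviation.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_on_noisy_patches(noisy_patches, n_components=20, seed=0):
    """noisy_patches: (N, d) array of vectorized patches (e.g., d = 8*8)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed)
    gmm.fit(noisy_patches)
    return gmm

def denoise_patches(noisy_patches, gmm, sigma, rank=10):
    """Posterior-mean style denoising with per-cluster low-rank Wiener filters."""
    d = noisy_patches.shape[1]
    resp = gmm.predict_proba(noisy_patches)           # (N, K) cluster posteriors
    out = np.zeros(noisy_patches.shape)
    for k in range(gmm.n_components):
        mu, cov_noisy = gmm.means_[k], gmm.covariances_[k]
        # Clean-signal covariance: remove the noise variance from the eigenvalues
        # and keep only the `rank` leading eigen-directions (a group subspace).
        evals, evecs = np.linalg.eigh(cov_noisy)      # ascending eigenvalues
        evals = np.clip(evals[::-1] - sigma**2, 0.0, None)
        evecs = evecs[:, ::-1]
        evals[rank:] = 0.0
        cov_clean = (evecs * evals) @ evecs.T
        # Per-cluster Wiener filter: x_hat = mu + C (C + sigma^2 I)^(-1) (y - mu)
        filt = cov_clean @ np.linalg.inv(cov_clean + sigma**2 * np.eye(d))
        out += resp[:, [k]] * (mu + (noisy_patches - mu) @ filt.T)
    return out
```

In this simplified form every cluster gets the same rank, whereas the abstract describes group-specific low-dimensional subspaces; estimating those dimensionalities per cluster, in an unsupervised way on the noisy data, is precisely where HDMI differs from this generic GMM sketch.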

Cited by 50 publications (61 citation statements: 1 supporting, 60 mentioning, 0 contrasting)
References 45 publications

“…Of these tasks, estimating the parameters of GGMM directly on noisy observations is a problem of particular interest, that could benefit from our approximations. Learning GMM priors on noisy patches has been shown to be useful in patch-based image restoration when clean patches are not available a priori, or to further adapt the model to the specificities of a given noisy image [66,60,26]. Another open problem is to analyze the asymptotic behavior of the minimum mean square estimator (MMSE) shrinkage with GGD prior, as an alternative to MAP shrinkage.…”
Section: Conclusion and Discussion (mentioning)
Confidence: 99%
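The open problem raised above, MMSE versus MAP shrinkage under a generalized Gaussian prior, can be illustrated in a deliberately simplified 1-D setting. The sketch below fixes the GGD shape parameter to 1 (a Laplacian prior), for which MAP shrinkage reduces to soft-thresholding, and approximates the MMSE shrinkage by brute-force numerical integration; all names and parameter values are illustrative and not taken from either paper.

```python
# Toy 1-D comparison of MAP and MMSE shrinkage for y = x + Gaussian noise,
# with a Laplacian prior on x (a generalized Gaussian of shape 1).
# Parameter values are illustrative only.
import numpy as np

def map_shrink_laplace(y, sigma, b):
    """MAP estimate under a Laplace(0, b) prior: soft-thresholding at sigma^2 / b."""
    thr = sigma**2 / b
    return np.sign(y) * np.maximum(np.abs(y) - thr, 0.0)

def mmse_shrink_laplace(y, sigma, b, grid=np.linspace(-50.0, 50.0, 20001)):
    """MMSE estimate E[x | y], approximated by numerical integration on a grid."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    # Unnormalized log-posterior: -(y - x)^2 / (2 sigma^2) - |x| / b
    log_post = (-(y[:, None] - grid[None, :])**2 / (2.0 * sigma**2)
                - np.abs(grid)[None, :] / b)
    w = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    return (w * grid).sum(axis=1) / w.sum(axis=1)

if __name__ == "__main__":
    ys = np.array([0.5, 1.0, 2.0, 4.0])
    print("MAP :", map_shrink_laplace(ys, sigma=1.0, b=1.0))   # small y -> exactly 0
    print("MMSE:", mmse_shrink_laplace(ys, sigma=1.0, b=1.0))  # small y -> shrunk, not 0
```

Running the toy example shows the qualitative difference hinted at in the quote: the MAP rule sets small coefficients exactly to zero, while the MMSE rule shrinks them smoothly toward zero without ever reaching it.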
“…Alternatively, modeling the distribution of patches of an image (i.e., small windows usually of size 8×8) has proven to be a powerful solution. In particular, popular patch techniques rely on non-local self-similarity [8], fields of experts [52], learned patch dictionaries [2,19,53], sparse or low-rank properties of stacks of similar patches [12,13,34,29], patch re-occurrence priors [37], or more recently mixture models patch priors [67,66,61,60,26,56,44].…”
Citation type: mentioning
Confidence: 99%
“…Zoran and Weiss [19] proposed the values of β to be (β₁, β₂, β₃, β₄, β₅, β₆) = (1, 4, 8, 16, 32, 64). In this section we verify that these betas are indeed the best.…”
Section: Choice of β Values (mentioning)
Confidence: 99%
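For context, the β values quoted above are the penalty weights of the half-quadratic splitting scheme used in EPLL-style restoration, increased from one outer iteration to the next. The sketch below shows where such a schedule enters a denoising loop under common conventions (data-fidelity weight λ = p²/σ², β scaled by 1/σ²); `patch_map_denoiser` is a hypothetical stand-in for the patch-prior MAP step, so treat the block as a schematic rather than the cited implementation.

```python
# Schematic half-quadratic splitting loop for denoising, to show the role of
# an increasing beta schedule. `patch_map_denoiser` is a hypothetical callable
# that MAP-denoises a stack of vectorized patches at a given noise level.
import numpy as np

def extract_patches(img, p):
    """All overlapping p x p patches of a 2-D image, as rows of a matrix."""
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def aggregate_patches(patches, shape, p):
    """Sum patch estimates back onto the pixel grid; also return per-pixel counts."""
    H, W = shape
    acc, cnt = np.zeros(shape), np.zeros(shape)
    idx = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[idx].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
            idx += 1
    return acc, cnt

def epll_denoise(y, sigma, patch_map_denoiser, p=8,
                 beta_schedule=(1, 4, 8, 16, 32, 64)):
    """Half-quadratic splitting with the beta schedule scaled by 1/sigma^2."""
    lam = p * p / sigma**2                 # data-fidelity weight (common EPLL convention)
    x = y.astype(float).copy()
    for beta in np.asarray(beta_schedule, dtype=float) / sigma**2:
        # z-step: MAP-denoise every patch of the current estimate at noise level 1/sqrt(beta)
        z = patch_map_denoiser(extract_patches(x, p), noise_std=1.0 / np.sqrt(beta))
        acc, cnt = aggregate_patches(z, y.shape, p)
        # x-step: closed-form blend of the noisy image and the aggregated patch estimates
        x = (lam * y + beta * acc) / (lam + beta * cnt)
    return x
```

Larger β values tie the auxiliary patches more tightly to the current image estimate, which is why the schedule increases across iterations rather than staying fixed.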
“…We kept N = 2·10⁶ and K = 200, but we now use a smaller patch size √d = 6 because of memory limitations. Four components of the GMM are represented in Figures 6, 7, 8 and 9. In Table 10, we compare all the denoising methods previously introduced: channel by channel with the 3 different color bases and with the prior learned in color. The study was conducted on 10 test images from the test set of images.…”
Section: Learning in Color (mentioning)
Confidence: 99%
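The comparison described above, channel-by-channel denoising in a color basis versus a prior learned directly on 3-channel patches, can be sketched as follows. The excerpt does not name the three color bases that were tested, so the orthonormal opponent basis below is only a common illustrative choice, and `gray_denoiser` / `color_denoiser` are hypothetical callables standing in for the patch-prior denoisers.

```python
# Sketch of the two color strategies: (a) denoise channel by channel after an
# orthonormal opponent color transform, (b) denoise full 3-channel patches with
# a prior learned in color. `gray_denoiser` and `color_denoiser` are hypothetical
# callables; the opponent basis is one common choice, not necessarily the one tested.
import numpy as np

# Orthonormal opponent basis (rows): a luminance axis and two chromatic axes.
OPPONENT = np.array([[1.0, 1.0, 1.0],
                     [1.0, 0.0, -1.0],
                     [1.0, -2.0, 1.0]])
OPPONENT /= np.linalg.norm(OPPONENT, axis=1, keepdims=True)

def denoise_channelwise(rgb_patches, gray_denoiser):
    """rgb_patches: (N, d, 3). Apply a grayscale patch denoiser per transformed channel."""
    opp = rgb_patches @ OPPONENT.T                  # decorrelate the color channels
    den = np.stack([gray_denoiser(opp[:, :, c]) for c in range(3)], axis=2)
    return den @ OPPONENT                           # orthonormal basis: transpose inverts

def denoise_joint_color(rgb_patches, color_denoiser):
    """Flatten each patch to a 3*d vector so the prior sees all channels jointly."""
    n, d, _ = rgb_patches.shape
    return color_denoiser(rgb_patches.reshape(n, 3 * d)).reshape(n, d, 3)
```

Because the basis is orthonormal, white Gaussian noise keeps the same standard deviation in every transformed channel, which is what makes applying a grayscale prior channel by channel meaningful.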