2018
DOI: 10.1109/tsp.2017.2777407
Learning Convex Regularizers for Optimal Bayesian Denoising

Abstract: We propose a data-driven algorithm for the maximum a posteriori (MAP) estimation of stochastic processes from noisy observations. The primary statistical properties of the sought signal are specified by the penalty function (i.e., the negative logarithm of the prior probability density function). Our alternating direction method of multipliers (ADMM)-based approach translates the estimation task into successive applications of the proximal mapping of the penalty function. Capitalizing on this direct link, we define…
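A minimal sketch of the ADMM splitting the abstract alludes to, for the simplest denoising case y = x + noise: the penalty phi enters the iterations only through its proximal mapping, which is exactly the component the paper proposes to learn. The soft-threshold prox and all parameter values below are illustrative stand-ins, not the learned operator from the paper.

```python
import numpy as np

def prox_soft_threshold(v, t):
    """Proximal map of t*|x| (soft threshold); a stand-in for the learned prox."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_map_denoise(y, prox, lam=1.0, rho=1.0, n_iter=100):
    """MAP denoising of y under i.i.d. Gaussian noise:
        minimize_x 0.5*||y - x||^2 + lam*phi(x),
    solved by ADMM; phi is accessed only through its proximal mapping."""
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        # x-update: quadratic data term, closed form
        x = (y + rho * (z - u)) / (1.0 + rho)
        # z-update: proximal mapping of (lam/rho)*phi
        z = prox(x + u, lam / rho)
        # dual update
        u += x - z
    return z

# usage: denoise a noisy piecewise-constant signal
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, -1.0], 50)
y = clean + 0.3 * rng.standard_normal(clean.size)
x_hat = admm_map_denoise(y, prox_soft_threshold, lam=0.5, rho=1.0)
```

Swapping prox_soft_threshold for a trainable proximal operator leaves the iteration itself unchanged, which is the link the abstract highlights.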
Cited by 11 publications (6 citation statements)
References: 41 publications
“…For given noise statistics, the regularization parameter could be learned; however, it would perform suboptimally for different noise statistics and would require retraining. Future systems should learn regularization parameters that can be adapted post-training to account for variable noise levels, similar to [55].…”
Section: Discussion
confidence: 99%
“…In recent years, researchers have considered more general parametric nonlinearities whose weights are learned during training. Such models involve linear combinations of Gaussian radial-basis functions [28] and cubic B-splines [29], [30].…”
Section: B. Link With Iterative Soft-Thresholding Algorithms
confidence: 99%
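A small sketch of what such a learned pointwise nonlinearity can look like: a linear combination of Gaussian radial-basis functions on a fixed grid of centers, with the combination weights as the trainable parameters. The grid, kernel width, and least-squares fit to a soft-threshold curve below are illustrative assumptions, not the training setup of [28]-[30].

```python
import numpy as np

class GaussianRBFNonlinearity:
    """Pointwise nonlinearity psi(v) = sum_k w_k * exp(-(v - c_k)^2 / (2*s^2)).
    Centers c_k lie on a fixed grid; only the weights w_k would be learned."""
    def __init__(self, centers, sigma, weights=None):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = float(sigma)
        self.weights = (np.zeros_like(self.centers)
                        if weights is None else np.asarray(weights, dtype=float))

    def basis(self, v):
        # shape (..., K): one Gaussian bump per center, evaluated elementwise
        d = np.asarray(v, dtype=float)[..., None] - self.centers
        return np.exp(-0.5 * (d / self.sigma) ** 2)

    def __call__(self, v):
        return self.basis(v) @ self.weights

    def fit_least_squares(self, v, target):
        # fit the weights so that psi(v) approximates a reference shrinkage curve
        A = self.basis(np.ravel(v))
        self.weights, *_ = np.linalg.lstsq(A, np.ravel(target), rcond=None)
        return self

# usage: fit the RBF expansion to a soft-threshold curve on [-4, 4]
grid = np.linspace(-4, 4, 33)
psi = GaussianRBFNonlinearity(centers=grid, sigma=0.25)
v = np.linspace(-4, 4, 400)
psi.fit_least_squares(v, np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0))
```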
“…Recalling the ADMM algorithm from Section 4.4, we know that solving a regularized inverse problem involves applying the proximal operator associated with the potential function. We can implicitly specify the potential function by learning the proximal operator, e.g., by parameterizing it using 1D B-splines [55,56]. We can see prox learning as a generalization of tuning the regularization weight: instead of scaling a known function, we deform a parametric function.…”
Section: Learning the Regularization Term
confidence: 99%
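A rough illustration of the distinction drawn in this excerpt, assuming a pointwise prox parameterized by its values at fixed knots (linear interpolation is used here as a simple stand-in for the cubic B-spline parameterization of [55, 56]): tuning a regularization weight only rescales one fixed curve, whereas learning the knot values deforms the curve itself.

```python
import numpy as np

class SplineProx:
    """Pointwise proximal operator defined by its values at fixed knots.
    Learning the knot values deforms the whole curve; tuning a
    regularization weight would only rescale a single fixed prox."""
    def __init__(self, knots, values):
        self.knots = np.asarray(knots, dtype=float)
        self.values = np.asarray(values, dtype=float)

    def __call__(self, v):
        v = np.asarray(v, dtype=float)
        # odd-symmetric extension so that prox(-v) = -prox(v)
        return np.sign(v) * np.interp(np.abs(v), self.knots, self.values)

# two special cases of the same parameterization:
knots = np.linspace(0.0, 5.0, 26)
identity_prox = SplineProx(knots, knots)                     # no regularization
soft_prox = SplineProx(knots, np.maximum(knots - 1.0, 0.0))  # l1 penalty, weight 1

# "deforming" rather than "scaling": any monotone set of knot values is allowed,
# e.g. a hypothetical learned curve with a firm-threshold-like shape
learned_prox = SplineProx(knots, np.sqrt(np.maximum(knots**2 - 1.0, 0.0)))
```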