2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp49062.2020.9231876
A Hybrid Interior Point - Deep Learning Approach for Poisson Image Deblurring

Abstract: In this paper, we address the problem of deconvolution of an image corrupted with Poisson noise by reformulating the restoration process as a constrained minimization of a suitable regularized data fidelity function. The minimization step is performed by means of an interior-point approach, in which the constraints are incorporated within the objective function through a barrier penalty and a forward-backward algorithm is exploited to build a minimizing sequence. The key point of our proposed scheme is that the…
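The abstract names the classical building blocks of the scheme: a data-fidelity term suited to Poisson noise, a barrier penalty enforcing the constraints, and forward-backward iterations. The sketch below illustrates that skeleton only, not the authors' implementation; the Kullback-Leibler fidelity, Gaussian blur operator, log-barrier schedule, Tikhonov regularizer, step size, and backtracking rule are all assumed, illustrative choices. The deep-learning component indicated in the title is not reproduced here.

```python
# Illustrative sketch (assumptions, not the authors' code): forward-backward
# iterations on a Poisson (Kullback-Leibler) data term plus a log-barrier for
# non-negativity, with a Tikhonov regularizer handled through its proximity operator.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(x, psf):
    # Symmetric PSF, so the blur operator equals its adjoint.
    return fftconvolve(x, psf, mode="same")

def kl_gradient(x, y, psf, bg=1e-2):
    # Gradient of the Kullback-Leibler fidelity sum((Hx + bg) - y*log(Hx + bg)).
    hx = blur(x, psf) + bg
    return blur(1.0 - y / hx, psf)

def prox_tikhonov(x, gamma, lam):
    # Proximity operator of (lam/2)||x||^2 with step gamma.
    return x / (1.0 + gamma * lam)

def deblur_poisson(y, psf, lam=1e-3, mu0=1.0, n_outer=8, n_inner=50, gamma=0.5):
    x = np.maximum(y, 1e-3)           # strictly positive initialization
    mu = mu0
    for _ in range(n_outer):          # outer loop: shrink the barrier parameter
        for _ in range(n_inner):      # inner loop: forward-backward iterations
            grad = kl_gradient(x, y, psf) - mu / x   # data term + log-barrier gradient
            step = gamma
            x_new = prox_tikhonov(x - step * grad, step, lam)
            while np.any(x_new <= 0):                # crude backtracking to stay feasible
                step *= 0.5
                x_new = prox_tikhonov(x - step * grad, step, lam)
            x = x_new
        mu *= 0.1
    return x

# Usage on synthetic data
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 30.0
psf = gaussian_psf()
y = rng.poisson(blur(truth, psf)).astype(float)
restored = deblur_poisson(y, psf)
```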

Cited by 3 publications (6 citation statements). References 26 publications.
“…Conclusion of the proof. Based on the previous calculations, Condition (32) is equivalent to (26). In addition, let us note that for every n ∈ {1, .…”
Section: Proposition 3.5
Mentioning confidence: 97%
“…The use of neural networks for solving inverse problems has become increasingly popular, especially in the image processing community. A rich panel of approaches has been proposed, either adapted to the sparsity of the data [18,19], or mimicking variational models [20,21], or iterating learned operators [22,23,24,25,26], or adapting the Tikhonov method [27]. The successful numerical results of the aforementioned works raise two theoretical questions: when these methods are based on the iteration of a neural network, do they converge (in the sense of the algorithm)?…”
Section: Variational Problem
Mentioning confidence: 99%
“…The use of neural networks for solving inverse problems has become increasingly popular, especially in the image processing community. A rich panel of approaches has been proposed, either adapted to the sparsity of the data [5], [6], or mimicking variational models [7], [8], or iterating learned operators [9]-[13]. In iterative approaches, a regularization operator is learned, either in the form of a proximity operator as in [9], [10], [13], of a regularization term [14], of a pseudodifferential operator [15], or of its gradient [2], [16].…”
Section: Introduction
Mentioning confidence: 99%
“…A rich panel of approaches has been proposed, either adapted to the sparsity of the data [5], [6], or mimicking variational models [7], [8], or iterating learned operators [9]-[13]. In iterative approaches, a regularization operator is learned, either in the form of a proximity operator as in [9], [10], [13], of a regularization term [14], of a pseudodifferential operator [15], or of its gradient [2], [16]. Strong connections also exist with Plug and Play methods [11], [17], [18], where the regularization operator is a pre-trained neural network.…”
Section: Introduction
Mentioning confidence: 99%
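The last two excerpts describe the pattern of replacing the proximity operator of the regularizer with a learned operator, as in Plug and Play methods. The sketch below is a minimal, assumed illustration of that pattern only: it uses a least-squares data term and a plain Gaussian smoother as a stand-in for a pre-trained network, and all names and parameters are illustrative rather than taken from the cited works.

```python
# Minimal sketch of a Plug-and-Play forward-backward iteration: the denoiser
# plays the role of the proximity operator of an implicit regularizer.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def pnp_forward_backward(y, A, At, denoiser, step=1.0, n_iter=100):
    # Iterate x <- denoiser(x - step * A^T(Ax - y)) for a least-squares data term.
    x = At(y)
    for _ in range(n_iter):
        grad = At(A(x) - y)            # gradient of (1/2)||Ax - y||^2
        x = denoiser(x - step * grad)  # learned (here, surrogate) operator in place of the prox
    return x

# Toy example: Gaussian blur as A (symmetric kernel, so A = A^T) and a Gaussian
# smoother standing in for a pre-trained network.
ax = np.arange(9) - 4
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / 4.5); psf /= psf.sum()
A = lambda u: fftconvolve(u, psf, mode="same")
rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
y = A(clean) + 0.01 * rng.standard_normal(clean.shape)
x_hat = pnp_forward_backward(y, A, A, lambda u: gaussian_filter(u, sigma=0.8), n_iter=50)
```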