2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.627
One Network to Solve Them All — Solving Linear Inverse Problems Using Deep Projection Models

Abstract: While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, different problems require different networks. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and costly to use these specially-trained networks. On the other hand, traditional methods using signal priors can be us…
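The abstract's core idea — splitting a linear inverse problem so that a single learned projection network handles the signal prior while the data term is solved in closed form — can be sketched with ADMM. Everything below is illustrative: the solver structure, the toy box-constraint "projector" standing in for the trained network, and the problem sizes are assumptions, not the paper's implementation.

```python
import numpy as np

def admm_projection_solver(y, A, project, rho=1.0, n_iter=200):
    """Minimize ||y - A x||^2 subject to x lying in a prior set,
    via ADMM splitting: the x-update solves the data term in closed
    form, the z-update applies the (learned) projection operator,
    and u is the scaled dual variable."""
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    # x-update solves (2 A^T A + rho I) x = 2 A^T y + rho (z - u)
    H = 2.0 * A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(H, 2.0 * Aty + rho * (z - u))
        z = project(x + u)   # learned projector plays the proximal's role
        u = u + x - z        # scaled dual update
    return z

# Toy usage: the "projector" clips to [0, 1], standing in for a network
# that projects onto the set of natural images.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = rng.uniform(0.0, 1.0, size=10)
y = A @ x_true
x_hat = admm_projection_solver(y, A, lambda v: np.clip(v, 0.0, 1.0))
```

The point of the split is that the same `project` can be reused across inpainting, super-resolution, etc. — only `A` changes per problem.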

Cited by 272 publications (238 citation statements) · References 48 publications
“…Some recent papers have approached learning the primal proximal operator [10,39] in the scope of ADMM, but these do not consider learning the dual. It is likely that learning the dual proximal offers an advantage since this allows the inclusion of various operators into the learning.…”
Section: Discussion
confidence: 99%
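The primal/dual distinction this quote draws can be made concrete with the Moreau decomposition, v = prox_f(v) + prox_{f*}(v), a standard identity that is not specific to the cited papers. The ℓ1 example below is an illustrative assumption: its primal proximal is soft-thresholding, and the decomposition yields the dual proximal (projection onto the ℓ∞ ball) for free.

```python
import numpy as np

def prox_l1(v):
    # Primal proximal of f(x) = ||x||_1: soft-thresholding at 1.
    return np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)

def prox_l1_conjugate(v):
    # Moreau decomposition v = prox_f(v) + prox_{f*}(v) gives the
    # dual proximal directly from the primal one.
    return v - prox_l1(v)

v = np.array([-2.0, -0.3, 0.0, 0.7, 3.0])
primal = prox_l1(v)
dual = prox_l1_conjugate(v)   # equals projection onto [-1, 1]
```

A learned dual proximal would replace `prox_l1_conjugate` while keeping the same relationship to its primal counterpart.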
“…An example of such an extension to solving inverse problems is [39], which learns an ADMM-like scheme for MRI reconstruction. Another is [10], which learns a "proximal" in an ADMM-like scheme for various image restoration problems. Finally, [30] considers solving finite dimensional linear inverse problems typically arising in image restoration.…”
Section: Survey of the Field
confidence: 99%
“…random vector, typically with a Gaussian distribution. Consequently, to recover x_0 from a linear-AWGN measurement of the form y = A x_0 + ξ, the compressed-sensing approach in (3) can be extended to a regularized least-squares problem [18] of the form x̂ = G(ẑ_0) for ẑ_0 := arg min_{z_0} …”
Section: A. Inference With Deep Generative Priors
confidence: 99%
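The regularized least-squares problem quoted above, roughly ẑ_0 = arg min_z ||y − A G(z)||² + λ||z||², can be sketched with plain gradient descent. The linear "generator" G, the step-size rule, and all dimensions below are assumptions for illustration; a real G would be a trained deep network optimized by backpropagation.

```python
import numpy as np

def invert_with_generative_prior(y, A, G, lam=1e-3, n_iter=3000):
    """Gradient descent on ||y - A G z||^2 + lam ||z||^2, with a
    linear map G standing in for a trained generator G(.)."""
    AG = A @ G
    # Step size 1/L, where L is the gradient's Lipschitz constant.
    L = 2.0 * np.linalg.norm(AG, 2) ** 2 + 2.0 * lam
    z = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * AG.T @ (AG @ z - y) + 2.0 * lam * z
        z -= grad / L
    return z, G @ z

# Toy usage: recover a low-dimensional code from m < n measurements.
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 4))        # code (4-dim) -> signal (20-dim)
z0 = rng.normal(size=4)
x0 = G @ z0
A = rng.normal(size=(8, 20))        # compressive measurements, m = 8
y = A @ x0                          # noiseless for simplicity
z_hat, x_hat = invert_with_generative_prior(y, A, G)
```

Because the generative prior confines x to a 4-dimensional manifold, 8 measurements suffice where the 20-dimensional signal alone would be underdetermined.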
“…This second step is often thought of as a denoising step. One approach to learning to solve inverse problems is to implicitly learn r(·) by explicitly learning a proximal operator in the form of a denoising autoencoder [18,27,28].…”
Section: Decoupled
confidence: 99%
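The decoupled view in this last quote — a data-consistency step followed by a "denoising" step that implicitly realizes the prior r(·) — can be sketched as a plug-and-play iteration. As an assumption for illustration, the learned denoising autoencoder is replaced by soft-thresholding (the exact proximal of an ℓ1 prior), which makes the loop an ISTA-style solver on a toy sparse-recovery problem.

```python
import numpy as np

def decoupled_solver(y, A, denoise, n_iter=2000):
    """Alternate a gradient step on the data term ||y - A x||^2 with a
    denoising step; `denoise` stands in for a learned denoiser."""
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * 2.0 * A.T @ (A @ x - y)  # data-consistency step
        x = denoise(x)                          # prior ("denoising") step
    return x

# Toy usage: sparse recovery, with soft-thresholding as the "denoiser".
rng = np.random.default_rng(2)
m, n, k = 30, 50, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x_true
thresh = 1e-3
denoise = lambda v: np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
x_hat = decoupled_solver(y, A, denoise)
```

Swapping the thresholding lambda for a trained denoising network turns this into the implicit-prior scheme the quote describes, with no other change to the loop.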