2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00570

GAN-Based Projector for Faster Recovery With Convergence Guarantees in Linear Inverse Problems

Abstract: A Generative Adversarial Network (GAN) with generator G trained to model the prior of images has been shown to perform better than sparsity-based regularizers in ill-posed inverse problems. Here, we propose a new method of deploying a GAN-based prior to solve linear inverse problems using projected gradient descent (PGD). Our method learns a network-based projector for use in the PGD algorithm, eliminating expensive computation of the Jacobian of G. Experiments show that our approach provides a speed-up of 60-8…
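A minimal sketch of the recovery loop the abstract describes: projected gradient descent in which a learned network projects each iterate back toward the range of the generator. The measurement operator `A`, observations `y`, learned projector `P`, and initializer `x0` are hypothetical placeholders, not the authors' released code.

```python
import torch

def pgd_recover(A, y, P, x0, step=1.0, n_iters=100):
    """Recover x from y = A @ x by PGD with a learned projector P."""
    x = x0.clone()
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)   # gradient of 0.5 * ||A x - y||^2
        x = P(x - step * grad)     # learned projector stands in for an exact
                                   # projection onto the range of G, so no
                                   # Jacobian of G is needed
    return x
```

Because `P` is applied as a black box, each iteration costs one forward pass rather than a Jacobian computation, which is the source of the reported speed-up.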

Cited by 55 publications (47 citation statements)
References 37 publications (63 reference statements)
“…It was shown in [11] that PnP convergence is guaranteed for ISTA and ADMM for a specially trained CNN denoiser, provided the data-fidelity is strongly convex. Apart from [11], PnP convergence has been established for CNN denoisers [47], [48], generative denoisers [5], and GAN-based projectors [49]. Moreover, it was shown in [50] that the DnCNN denoiser can be approximately expressed as the proximal operator of a nonconvex function.…”
Section: B. Prior Work
confidence: 99%
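As an illustration of the plug-and-play (PnP) scheme this quote discusses, here is a hedged sketch of PnP-ISTA in which a pretrained CNN denoiser `D` (e.g., DnCNN) stands in for the proximal operator. The convergence results cited above require further conditions (a specially trained denoiser, strongly convex data fidelity) that this sketch does not enforce; `A`, `y`, `D`, and `x0` are placeholders.

```python
import torch

def pnp_ista(A, y, D, x0, n_iters=100):
    """PnP-ISTA: gradient step on the data fidelity, then denoise."""
    step = 1.0 / torch.linalg.matrix_norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = x0.clone()
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)   # gradient of 0.5 * ||A x - y||^2
        x = D(x - step * grad)     # denoiser replaces the proximal operator
    return x
```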
“…Differential unrolled ADMM (DU-ADMM) has been proposed [15], which improves on the original OneNet, showing faster convergence and a reduction in overfitting during training. In a similar vein, Reference [16] used projected gradient descent (PGD) to allow a significant speed-up in convergence by removing the need to compute the Jacobian of the generator. Nevertheless, the training still requires generating artificially perturbed images, which the system learns to project back onto the originals, so it could be prone to bias due to the choice of perturbations.…”
Section: Deep Learning Based Techniques
confidence: 99%
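The perturbation-based training this quote refers to can be sketched as follows: clean images are corrupted synthetically and the projector network learns to map them back. The additive-Gaussian perturbation used here is an illustrative assumption, and it is exactly this choice of perturbation distribution that the quote flags as a potential source of bias.

```python
import torch

def projector_training_step(net, optimizer, clean_batch, sigma=0.1):
    """One training step: perturb clean images, regress back to them."""
    perturbed = clean_batch + sigma * torch.randn_like(clean_batch)
    loss = torch.nn.functional.mse_loss(net(perturbed), clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```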
“…[9] first employed pre-trained generators to reconstruct signals, providing a recovery guarantee. Thereafter, a number of studies [13], [17], [19], [20], [22], [23] have been conducted to enhance the performance of CSPG. In contrast to CSPG, CSUG [12], [30] includes methods based on the deep image prior [31], so that the weights of an untrained generator can be trained using only one measurement vector y_te.…”
Section: B. CS via NN with Generators
confidence: 99%
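For contrast with the projector-based approach, a minimal sketch of the CSPG setting referenced in this quote (recovery with a pretrained generator, in the style of [9]): the latent code `z` is optimized so that `G(z)` matches the measurements. `G`, `A`, `y`, and `z_dim` are assumptions supplied by the caller.

```python
import torch

def csgm_recover(G, A, y, z_dim, n_steps=500, lr=0.01):
    """Recover x ~ G(z*) by minimizing ||A G(z) - y||^2 over z."""
    z = torch.randn(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        loss = torch.sum((A @ G(z) - y) ** 2)  # measurement misfit
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```

Note that every step here backpropagates through `G`, which is the per-iteration cost that the projector-based PGD sketched above avoids.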
“…As neural networks (NNs) have achieved enormous success in both supervised learning, including regression and classification tasks, and unsupervised learning, such as clustering and density estimation tasks, many researchers have recently devoted considerable effort to leveraging NNs as a structural assumption for CS [4], [9]-[20].…”
Section: Introduction
confidence: 99%