2022
DOI: 10.48550/arxiv.2203.04382
Preprint

Regularized Training of Intermediate Layers for Generative Models for Inverse Problems

Abstract: Generative Adversarial Networks (GANs) have been shown to be powerful and flexible priors when solving inverse problems. One challenge of using them is overcoming representation error, the fundamental limitation of the network in representing any particular signal. Recently, multiple proposed inversion algorithms reduce representation error by optimizing over intermediate layer representations. These methods are typically applied to generative models that were trained agnostic of the downstream inversion algor…
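
Below is a minimal, hedged PyTorch sketch of the general idea described in the abstract: solving a linear inverse problem by first optimizing a generator's latent code and then re-optimizing an intermediate-layer activation with a proximity regularizer. The toy two-stage generator, all dimensions, and the regularization weight are illustrative assumptions; this is not the paper's architecture, algorithm, or training procedure.

import torch
import torch.nn as nn

torch.manual_seed(0)

latent_dim, hidden_dim, signal_dim, num_meas = 16, 64, 256, 100

# Toy two-stage generator: z -> h (intermediate layer) -> x.
g1 = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU())
g2 = nn.Sequential(nn.Linear(hidden_dim, signal_dim))
for p in list(g1.parameters()) + list(g2.parameters()):
    p.requires_grad_(False)  # the generator is fixed; only z and h are optimized

A = torch.randn(num_meas, signal_dim) / num_meas ** 0.5    # random measurement matrix
x_true = g2(g1(torch.randn(1, latent_dim))).detach()       # in-range target signal
y = x_true @ A.T                                           # noiseless measurements

# Stage 1: standard latent-space inversion, optimizing only z.
z = torch.randn(1, latent_dim, requires_grad=True)
opt_z = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt_z.zero_grad()
    loss = ((g2(g1(z)) @ A.T - y) ** 2).sum()
    loss.backward()
    opt_z.step()

# Stage 2: re-optimize the intermediate activation h, initialized at g1(z),
# with a proximity penalty keeping h near the first stage's output.
h = g1(z).detach().clone().requires_grad_(True)
h_init = h.detach().clone()
opt_h = torch.optim.Adam([h], lr=1e-2)
lam = 1e-3  # regularization weight (illustrative)
for _ in range(500):
    opt_h.zero_grad()
    loss = ((g2(h) @ A.T - y) ** 2).sum() + lam * ((h - h_init) ** 2).sum()
    loss.backward()
    opt_h.step()

x_hat = g2(h).detach()
print("relative error:", ((x_hat - x_true).norm() / x_true.norm()).item())

Because the intermediate activation h is allowed to move away from the exact output of the first stage, the set of signals the model can produce is enlarged; this is the mechanism the abstract refers to as reducing representation error.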

Cited by 2 publications (2 citation statements) | References 18 publications

“…With a similar motivation but a very different approach, various methods were proposed in [63], [32], [50], [31] based on optimizing intermediate layers in the neural network defining G, which helps to expand the range of the generator and mitigate representation error. Conditions were given under which the required number of measurements is provably smaller than in Theorem 2, and improvements in out-of-distribution robustness were observed experimentally.…”
Section: E. Further Developments
confidence: 99%
“…With a similar motivation but a very different approach, various methods were proposed in [59], [28], [46] based on optimizing intermediate layers in the neural network defining G, which helps to expand the range of the generator and mitigate representation error. Conditions were given under which the required number of measurements is provably smaller than in Theorem 2, and improvements in out-of-distribution robustness were observed experimentally.…”
Section: E. Further Developments
confidence: 99%
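
Schematically (the notation here is assumed for illustration, not quoted from the cited works), if the generator is split as $G = G_2 \circ G_1$, standard latent-space inversion solves

$$\min_{z} \; \|A\,G_2(G_1(z)) - y\|_2^2,$$

while the intermediate-layer methods referenced in these citation statements relax this to something of the form

$$\min_{z,\,h} \; \|A\,G_2(h) - y\|_2^2 + \lambda\,\|h - G_1(z)\|_2^2,$$

so the reconstruction is drawn from the larger set $\{G_2(h)\}$ rather than the exact range $\{G_2(G_1(z))\}$. This is the sense in which such methods "expand the range of the generator" and mitigate representation error.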