2021
DOI: 10.48550/arxiv.2110.12271
Preprint
Self-Validation: Early Stopping for Single-Instance Deep Generative Priors

Abstract: Recent works have shown the surprising effectiveness of deep generative models in solving numerous image reconstruction (IR) tasks, even without training data. We collectively refer to these models, such as the deep image prior and the deep decoder, as single-instance deep generative priors (SIDGPs). Their successes, however, often hinge on appropriate early stopping (ES), which so far has largely been handled in an ad hoc manner. In this paper, we propose the first principled method for ES when applying SIDGPs to IR, takin…

Cited by 1 publication (1 citation statement)
References 41 publications
“…Choi and Lee [31] studied a method that exploits all samples in low-resource sentence classification through early stopping and initialization of parameters, demonstrating the versatility of these techniques across different applications. Moreover, two other studies, Wang et al. [32] and Li et al. [33], applied early stopping to the deep image prior and to single-instance deep generative priors, respectively. Their work further indicates the potential utility of early stopping in diverse deep-learning contexts.…”
Section: Introduction
confidence: 99%