Recent works have shown the surprising effectiveness of deep generative models in solving numerous image reconstruction (IR) tasks, even without training data. We collectively call these models, such as the deep image prior and deep decoder, single-instance deep generative priors (SIDGPs). The successes, however, often hinge on appropriate early stopping (ES), which so far has largely been handled in an ad-hoc manner. In this paper, we propose the first principled method for ES when applying SIDGPs to IR, taking advantage of the typical bell-shaped trend of the reconstruction quality. In particular, our method is based on collaborative training and self-validation: the primal reconstruction process is monitored by a deep autoencoder, which is trained online with the historic reconstructed images and constantly used to validate the reconstruction quality. Experimentally, on several IR problems and different SIDGPs, our self-validation method reliably detects near-peak performance and signals good ES points. Our code is available at https://sun-umn.github.io/Self-Validation/.
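To make the collaborative-training idea concrete, below is a minimal PyTorch sketch of the self-validation loop as described above. The autoencoder architecture (`TinyAE`), the window size, the patience rule, and the use of the autoencoder's reconstruction error on the current output as a quality proxy are all illustrative assumptions, not the paper's exact implementation or hyperparameters.

```python
# A minimal sketch (assumptions noted below), not the authors' exact method.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Small convolutional autoencoder used as the online validator (assumed architecture)."""
    def __init__(self, ch=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, ch, 4, stride=2, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruct_with_self_validation(sidgp, z, y, forward_op,
                                     max_iters=5000, window=20, patience=200):
    """Fit a SIDGP (e.g., DIP) to measurement y; stop when the online
    autoencoder's error on the current reconstruction stops improving.
    `sidgp`, `z`, `forward_op`, `window`, and `patience` are illustrative names/values."""
    opt = torch.optim.Adam(sidgp.parameters(), lr=1e-3)
    ae, ae_opt = TinyAE(), None
    ae_opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    history = []                                   # recent reconstructed images
    best_val, best_x, since_best = float('inf'), None, 0
    for it in range(max_iters):
        # --- primal reconstruction step (standard SIDGP fitting update) ---
        opt.zero_grad()
        x = sidgp(z)
        loss = ((forward_op(x) - y) ** 2).mean()
        loss.backward()
        opt.step()

        x_det = x.detach()
        history.append(x_det)
        history = history[-window:]                # keep a sliding window of outputs

        # --- online training of the autoencoder on historic reconstructions ---
        ae_opt.zero_grad()
        batch = torch.cat(history, dim=0)
        ae_loss = ((ae(batch) - batch) ** 2).mean()
        ae_loss.backward()
        ae_opt.step()

        # --- self-validation: AE error on the current output proxies quality ---
        with torch.no_grad():
            val = ((ae(x_det) - x_det) ** 2).mean().item()
        if val < best_val:
            best_val, best_x, since_best = val, x_det, 0
        else:
            since_best += 1
        if since_best >= patience:                 # assumed ES rule: the bell-shaped
            break                                  # quality curve appears to have peaked
    return best_x
```

The patience-based stopping rule is one simple way to turn the validator's error curve into an ES decision; other detectors of the curve's turning point could be substituted.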
Introduction

Validation-based ES is one of the most reliable strategies for controlling generalization errors in supervised learning, especially with potentially overspecified models such as in gradient boosting and modern DNNs [10,28,47]. Beyond supervised learning, ES often remains critical to learning success, but there are no principled ways, as universal as validation is for supervised learning, to decide when to stop. In this paper, we take the first step toward filling this gap, focusing on solving IR, a central family of inverse problems, using training-free deep generative models.