2019
DOI: 10.1007/s10851-019-00923-x
Regularization by Architecture: A Deep Prior Approach for Inverse Problems

Abstract: The present paper studies the so-called deep image prior (DIP) technique in the context of inverse problems. DIP networks have recently been introduced for applications in image processing [50], and first experimental results for applying DIP to inverse problems have been reported [51]. This paper aims at discussing different interpretations of DIP and at obtaining analytic results for specific network designs and linear operators. The main contribution is to introduce the idea of viewing these approaches as th…

Cited by 90 publications (84 citation statements)
References 51 publications
“…The images started to deteriorate slowly for more iterations. For implementation details, as well as further numerical examples also showing the limitation of the DIP approach, see Dittmer et al. (2018).…”
Section: Deep Learning for Magnetic Particle Imaging (MPI)
Citation type: mentioning (confidence: 99%)
“…We now briefly summarize the known theoretical foundations of DIP for inverse problems based on the recent paper by Dittmer, Kluth, Maass and Baguer (2018), who analyse and prove that certain network architectures in combination with suitable stopping rules do indeed lead to regularization schemes, which lead to the notion of ‘regularization by architecture’. We also include numerical results for the integration operator; more complex results for MPI are presented in Section 7.5.…”
Section: Learning in Functional Analytic Regularization
Citation type: mentioning (confidence: 99%)
“…Several interpretations of its training objective in Eq. (3) have been discussed and theoretically analyzed in [6]. Of interest is the question if DIP can be further improved by adjusting the training objective.…”
Section: DIP: Background and Prior Work
Citation type: mentioning (confidence: 99%)
“…As an alternative, we propose an approach based on an inverted perspective of Eq. (6). Specifically, we aim to search for a point in V that minimizes some distance to T, i.e., for an appropriate choice of some full-rank matrix B ∈ R^{m×n}, m ≤ n, we formulate the objective…”
Section: A Subspace-Induced DIP Objective
Citation type: mentioning (confidence: 99%)