2021
DOI: 10.1088/1361-6420/ac245d

Learning regularization parameters of inverse problems via deep neural networks

Abstract: In this work, we describe a new approach that uses deep neural networks (DNNs) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data are computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise…
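
The supervised setup described in the abstract lends itself to a compact illustration: train a small network offline on pairs of observations and precomputed "optimal" parameters, then obtain a parameter for new data in a single forward pass. The sketch below is a minimal, assumption-laden rendering of that idea in PyTorch; the architecture, the softplus output, and all names are illustrative, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): learn a map from an
# observation vector b to a positive scalar regularization parameter.
import torch
import torch.nn as nn

class RegParamNet(nn.Module):
    """Small fully connected network: observation b -> lambda > 0."""
    def __init__(self, m, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(m, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, b):
        # softplus keeps the predicted parameter strictly positive
        return nn.functional.softplus(self.body(b))

def train(model, B, lam_star, epochs=200, lr=1e-3):
    """B: (n, m) training observations; lam_star: (n, 1) regularization
    parameters computed offline (e.g., by an expensive classical method)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(B), lam_star)
        loss.backward()
        opt.step()
    return model

# Inference on new data is one cheap forward propagation:
#   lam_new = model(b_new.unsqueeze(0))
```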

Cited by 32 publications (42 citation statements) · References 78 publications
“…Using the same computational setup as before, we consider varying sizes of the image in Figure 1, left panel. Note that in this setup we also slightly vary the regularization parameter µ ∈ [1, 20] to obtain more realistic timings for "near optimal" regularization parameters. Additionally, in our comparison we include timings for a generalized Tikhonov approach of the form (1.3).…”
Section: Numerical Experiments (mentioning, confidence: 99%)
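
The generalized Tikhonov approach in this statement is referenced only by its equation number (1.3). Assuming the standard form min_x ||Ax − b||² + µ||Lx||², a timing experiment over a small grid of parameters looks roughly as follows; A, L, b, and the grid are placeholders, not details from the cited paper.

```python
# Hedged sketch: generalized Tikhonov solve, assuming (1.3) has the form
#   min_x ||A x - b||^2 + mu ||L x||^2,
# solved via the regularized normal equations.
import numpy as np

def generalized_tikhonov(A, b, L, mu):
    """Solve (A^T A + mu L^T L) x = A^T b for a fixed parameter mu."""
    lhs = A.T @ A + mu * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# Timing over "near optimal" parameters mu in [1, 20], as in the quote:
#   for mu in np.linspace(1.0, 20.0, 5):
#       x = generalized_tikhonov(A, b, L, mu)
```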
“…There is a substantial literature on determining the regularization parameter µ in (1.3), for which details are available in texts such as [39, 40, 3]. These range from techniques that do not need any information about the statistical distribution of the noise in the data, such as the L-curve that trades off between the data fit and the regularizer [40], the method of generalized cross-validation [32], and supervised learning techniques [53, 1]. Some other approaches are statistically based and require that an estimate of the variance of the noise in the data, assumed to be normal, is known.…”
Section: Introduction (mentioning, confidence: 99%)
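
Of the parameter-choice methods listed in this statement, generalized cross-validation is the easiest to make concrete for standard-form Tikhonov via the SVD. The sketch below is a textbook-style illustration, not code from [32] or the cited works; the candidate grid and the form of the penalty (µ multiplying ||x||²) are assumptions.

```python
# Illustrative GCV sketch for min_x ||A x - b||^2 + mu ||x||^2 (assumed form).
# G(mu) = ||A x_mu - b||^2 / trace(I - A A_mu^+)^2 is minimized over a grid.
import numpy as np

def gcv_tikhonov(A, b, mus):
    """Return the candidate mu minimizing the GCV function, via the SVD of A."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    # component of b outside the range of A adds a constant to the residual
    resid0 = b @ b - beta @ beta
    best_mu, best_g = None, np.inf
    for mu in mus:
        f = s**2 / (s**2 + mu)                 # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + resid0
        denom = (A.shape[0] - np.sum(f)) ** 2  # trace(I - A A_mu^+)^2
        g = resid / denom
        if g < best_g:
            best_mu, best_g = mu, g
    return best_mu

# Usage: mu = gcv_tikhonov(A, b, np.logspace(-6, 2, 50))
```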