2021
DOI: 10.48550/arxiv.2110.02720
Preprint

Efficient learning methods for large-scale optimal inversion design

Abstract: In this work, we investigate various approaches that use learning from training data to solve inverse problems, following a bilevel learning approach. We consider a general framework for optimal inversion design, where training data can be used to learn optimal regularization parameters, data fidelity terms, and regularizers, thereby resulting in superior variational regularization methods. In particular, we describe methods to learn optimal $p$ and $q$ norms for $L_p$-$L_q$ regularization and methods to learn optim…
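
For concreteness, the bilevel structure described in the abstract can be written schematically as follows; the quadratic training loss, forward operator $A$, and regularization operator $L$ are illustrative assumptions, not necessarily the paper's exact formulation:

\min_{\theta=(p,\,q,\,\lambda)}\ \frac{1}{N}\sum_{j=1}^{N}\bigl\|\widehat{x}_j(\theta)-x_j^{\mathrm{true}}\bigr\|_2^2
\quad\text{subject to}\quad
\widehat{x}_j(\theta)=\arg\min_{x}\ \|Ax-b_j\|_p^p+\lambda\,\|Lx\|_q^q,

where the outer problem fits the hyperparameters $\theta$ to $N$ training pairs $(b_j, x_j^{\mathrm{true}})$ and the inner problem is the variational reconstruction.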

Cited by 2 publications (2 citation statements) · References 54 publications (107 reference statements)
“…We will extend the DF argument to be used to find µ using a standard bisection algorithm at relatively low cost, when the original model parameters can be assumed to be differentially Laplacian, as is the case for standard image deblurring problems. Fourth, we consider $\ell_2$-$\ell_1$ norm regularization here; however, the developed approach extends also to other $\ell_p$-$\ell_q$ norm problems [13] and even more general objective functions. We will investigate the convergence and numerical advantages and disadvantages of utilizing a variable projected approach for such $\ell_p$-$\ell_q$ problems and for supervised learning loss function fitting within this framework.…”
Section: Discussion
confidence: 99%
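
A minimal sketch of the parameter search described in the statement above, assuming a Tikhonov-type inner problem and a generic discrepancy-style criterion standing in for the citing paper's DF argument (all names and tolerances here are illustrative):

import numpy as np

def discrepancy(mu, A, b, L, noise_level):
    # Residual norm of the Tikhonov solution minus the target noise level.
    # Inner problem: x_mu = argmin_x ||A x - b||^2 + mu * ||L x||^2,
    # solved via the normal equations. This is an illustrative stand-in
    # for the citing paper's DF criterion, not their exact formulation.
    x_mu = np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ b)
    return np.linalg.norm(A @ x_mu - b) - noise_level

def bisect_mu(A, b, L, noise_level, lo=1e-8, hi=1e2, tol=1e-6):
    # Standard bisection for the root of discrepancy(mu) = 0, assuming the
    # residual norm increases monotonically in mu (true for Tikhonov).
    # Geometric midpoints are used because mu typically spans many decades.
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if discrepancy(mid, A, b, L, noise_level) < 0:
            lo = mid   # residual below the noise level: regularize more
        else:
            hi = mid
        if hi / lo < 1.0 + tol:
            break
    return np.sqrt(lo * hi)

The endpoints lo and hi must bracket the root, i.e. discrepancy(lo, ...) < 0 < discrepancy(hi, ...); each bisection step then costs only one linear solve, which is the "relatively low cost" the authors refer to.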
“…Other well-known methods for choosing the parameter λ include Generalized Cross Validation (GCV) [10], which chooses λ to maximize the accuracy with which we can predict the value of a pixel that has been omitted, the unbiased predictive risk estimator (UPRE) [19], and, more recently, methods based on learning when training data is available [4,5].…”
Section: Figure 17: The Reconstructed Image
confidence: 99%
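
For reference, the standard GCV criterion mentioned in the statement above selects λ by minimizing the following ratio; the Tikhonov-type influence matrix $A_\lambda$ shown here is a common special case, assumed for illustration rather than taken from [10]:

\mathrm{GCV}(\lambda)\;=\;\frac{\|(I - A_\lambda)\,b\|_2^2}{\bigl[\operatorname{trace}(I - A_\lambda)\bigr]^2},
\qquad
A_\lambda \;=\; A\,(A^{\top}A + \lambda^2 L^{\top}L)^{-1}A^{\top},

so that $A_\lambda b = A x_\lambda$ is the predicted data; the trace term penalizes under-regularized solutions that fit every pixel, which captures the leave-one-out prediction idea described in the quotation.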