2020
DOI: 10.1016/j.cageo.2020.104427
Comparing RSVD and Krylov methods for linear inverse problems

Abstract: In this work we address regularization parameter estimation for ill-posed linear inverse problems with an ℓ2 penalty. Regularization parameter selection is of utmost importance for all inverse problems, and estimating it generally relies on the experience of the practitioner. For regularization with an ℓ2 penalty, many parameter selection methods exist that exploit the fact that the solution and the residual can be written in explicit form. Parameter selection methods are functionals that depend on th…
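Since the abstract notes that, with an ℓ2 penalty, the solution and residual have explicit forms, a minimal sketch of Tikhonov regularization via the SVD, using generalized cross-validation (GCV) as one example parameter selection functional, could look as follows. All names and the test problem here are illustrative, not taken from the paper:

```python
# Illustrative sketch (not the paper's method): Tikhonov solution via SVD,
# with GCV as one example of a regularization parameter selection functional.
import numpy as np

def tikhonov_svd(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
    beta = U.T @ b
    x = Vt.T @ (f * beta / s)           # explicit regularized solution
    return x, b - A @ x                 # solution and residual

def gcv(A, b, lam):
    """GCV functional: ||residual||^2 / (m - trace of influence matrix)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    beta = U.T @ b
    # residual norm^2 = filtered part + component of b outside range(A)
    num = np.sum(((1 - f) * beta)**2) + (b @ b - beta @ beta)
    return num / (len(b) - np.sum(f))**2

# Small synthetic test problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(50)

lams = np.logspace(-4, 1, 50)
lam_best = lams[np.argmin([gcv(A, b, l) for l in lams])]
x, r = tikhonov_svd(A, b, lam_best)
```

In practice one evaluates the selection functional over a grid (or by 1-D optimization) in λ, which is cheap once the SVD is available; the paper's point is that such explicit forms are what most ℓ2 selection methods exploit.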

Cited by 3 publications (3 citation statements) · References 39 publications
“…This equation can be calculated more efficiently through the Krylov model reduction method introduced in (7). Using (34), we can redefine (29) as follows:…”
Section: A Regularization Parameter For Stability
Mentioning confidence: 99%
“…In addition, measurement errors should be carefully controlled in VTS because small errors are dramatically amplified through the inverse problem. To address this issue, many researchers have developed stabilizers such as the sequential function specification method (SFSM) [9], the regularization method (RM) [30], [31], iterative regularization (IR) [32], the combined SFSM-RM [33], and the randomized singular value decomposition [34]. These methods provide good stabilization performance, but they may require an additional computational burden at every time step to reduce the ill-conditioning of the gain coefficient matrix.…”
Section: Introduction
Mentioning confidence: 99%
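The randomized singular value decomposition referenced in [34] above can be sketched in a few lines. This is a generic illustration of the standard range-finder approach, not the cited implementation:

```python
# Generic randomized SVD sketch (illustrative, not the implementation in [34]).
import numpy as np

def rsvd(A, k, oversample=10, seed=0):
    """Approximate the top-k singular triplets of A via random projection."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Sample the range of A with a Gaussian test matrix
    Y = A @ rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for range(Y)
    B = Q.T @ A                          # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```

For matrices whose singular values decay quickly (the ill-posed setting), the truncated factorization doubles as a regularizer, which is why it appears in the list of stabilizers above.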
“…Furthermore, in recent years, various methods have aimed to reduce computational costs, e.g. through trace estimation and other randomized linear algebra approaches [60]. However, for many large-scale problems, the burden of computing a suitable regularization parameter λ still remains.…”
Section: Standard Tikhonov Regularization
Mentioning confidence: 99%
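As an illustration of the trace-estimation idea mentioned in the snippet above, here is the classic Hutchinson estimator, which replaces an exact trace (e.g. the influence-matrix trace in a GCV denominator) with a matrix-vector-product average. This is a hypothetical sketch with names of our choosing, not the specific method of [60]:

```python
# Hypothetical sketch of stochastic trace estimation (Hutchinson estimator).
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, seed=0):
    """Estimate trace(A) using only matrix-vector products A @ z."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        total += z @ matvec(z)                # E[z^T A z] = trace(A)
    return total / num_samples
```

Because only products A @ z are needed, such estimators avoid forming or factoring A, which is the appeal for large-scale regularization parameter selection.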