2010
DOI: 10.1016/j.csda.2009.05.026

Regularization parameter estimation for large-scale Tikhonov regularization using a priori information

Cited by 34 publications (43 citation statements)
References 23 publications (47 reference statements)
“…Although the proof of the result on the degrees of freedom for the underdetermined case m < n effectively follows the ideas introduced in [16,22], the modification presented here provides a stronger result which can also strengthen the result for the overdetermined case, m ≥ n.…”
Section: Theoretical Development
confidence: 64%
“…The algorithm in [16] was presented for small-scale problems, in which one can use the singular value decomposition (SVD) [5] of the matrix G when L = I, or the generalized singular value decomposition (GSVD) [19] of the matrix pair [W_d G; L]. For the large-scale case, an approach using Golub-Kahan iterative bidiagonalization based on the LSQR algorithm [20,21] was presented in [22], along with an extension of the algorithm to the non-central distribution of P_{σ_L}(m_Tik(σ_L)), namely when m_0 is unknown but may be estimated from a set of measurements. In this paper the χ² principle is first extended to the estimation of σ_L for underdetermined problems, specifically for the central χ² distribution with known m_0, with the proof of the result in Section 2 and examples in Section 2.4.…”
Section: Introduction
confidence: 99%
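The χ² principle cited above chooses the regularization parameter so that the Tikhonov functional, evaluated at its minimizer, matches the expected value of a χ² distribution with the appropriate degrees of freedom. A minimal sketch of this idea for the small-scale SVD case (assuming L = I, whitened noise, known zero mean m_0, and an overdetermined G, so the degrees of freedom equal m) is below; the function name and the bisection bracket are illustrative choices, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import brentq

def chi2_tikhonov_lambda(G, d, dof=None):
    """Sketch of the chi-squared principle for Tikhonov regularization
    with L = I and unit-variance (whitened) noise: choose lambda so that
    F(lambda) = ||G x - d||^2 + lambda^2 ||x||^2, evaluated at the
    Tikhonov minimizer x(lambda), equals its expected chi^2 value."""
    m, n = G.shape
    if dof is None:
        dof = m  # degrees of freedom for the overdetermined case, L = I

    # SVD of G; beta holds the data rotated into the left singular basis.
    U, s, Vt = np.linalg.svd(G, full_matrices=True)
    beta = U.T @ d
    tail = np.sum(beta[len(s):] ** 2)  # component of d outside range(G)

    def F(lam):
        # Functional at the minimizer, written via SVD filter factors:
        # F(lam) = sum_i lam^2 beta_i^2 / (s_i^2 + lam^2) + tail.
        return np.sum(lam**2 * beta[:len(s)]**2 / (s**2 + lam**2)) + tail

    # F is monotonically increasing in lambda, so a single root of
    # F(lambda) - dof can be bracketed and found by root finding.
    lam = brentq(lambda t: F(t) - dof, 1e-10, 1e10)
    x = Vt.T @ (s * beta[:len(s)] / (s**2 + lam**2))
    return lam, x
```

For large-scale problems the quoted work replaces the SVD with Golub-Kahan bidiagonalization (LSQR), which evaluates the same functional on a small projected problem instead of factoring G.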
“…This is an extension of the scalar χ² method [21,22,23,31] which can be viewed as a regularization method. The new method amounts to solving multiple χ² tests to give an equal number of equations as the number of unknowns in the diagonal weighting matrix for data or parameter misfits.…”
Section: Discussion
confidence: 99%
“…This statistical interpretation of the weights in (3) and multipliers in (8) gives us a method for calculating them, and we term it the χ² method for parameter estimation and uncertainty quantification [21,22,23,31].…”
Section: Regularization and Constrained Optimization
confidence: 99%