2021
DOI: 10.1007/s11075-021-01087-9
Generalized cross validation for ℓp-ℓq minimization

Abstract: Discrete ill-posed inverse problems arise in various areas of science and engineering. The presence of noise in the data often makes it difficult to compute an accurate approximate solution. To reduce the sensitivity of the computed solution to the noise, one replaces the original problem by a nearby well-posed minimization problem, whose solution is less sensitive to the noise in the data than the solution of the original problem. This replacement is known as regularization. We consider the situation when the…

Cited by 16 publications (15 citation statements). References 39 publications.
“…We refer to the solution subspace 𝒱_{k+1} = range(V_{k+1}) as a generalized Krylov subspace. Note that the computation of r^{(k+1)} requires only one matrix-vector product with each of A^T and L^T, since we can exploit the QR factorizations (11) and relation (8) to avoid computing any other matrix-vector products with the matrices A and L. Moreover, we store and update the "skinny" matrices AV_k and LV_k at each iteration to reduce the computational cost. The initial space 𝒱_1 is usually chosen to contain a few selected vectors and to be of small dimension.…”
Section: Majorization-Minimization in Generalized Krylov Subspaces
confidence: 99%
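The incremental subspace update described in this passage can be sketched in NumPy. This is a minimal illustration with function and variable names of our own choosing; a single Gram-Schmidt pass stands in for the QR updates referenced as (11), and reorthogonalization would be needed in a robust implementation:

```python
import numpy as np

def expand_gks(A, L, V, AV, LV, r):
    """One expansion step of a generalized Krylov subspace (sketch).

    V  : n x k matrix with orthonormal columns spanning the current subspace
    AV : A @ V, LV : L @ V  (the "skinny" matrices, updated incrementally)
    r  : new residual vector to be added to the subspace
    """
    # Orthogonalize r against the current basis and normalize it.
    r = r - V @ (V.T @ r)
    r = r / np.linalg.norm(r)
    V_new = np.column_stack([V, r])
    # Only one product with A and one with L per step: append a single column
    # instead of recomputing A @ V_new and L @ V_new from scratch.
    AV_new = np.column_stack([AV, A @ r])
    LV_new = np.column_stack([LV, L @ r])
    return V_new, AV_new, LV_new
```

Appending one column to AV and LV per step is what keeps the cost of each iteration at a single matrix-vector product with A and with L.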
“…This subsection summarizes the approach presented in [11]. As above, we consider a non-stationary method for determining μ.…”
Section: Generalized Cross Validation
confidence: 99%
“…However, computing the SVD might not be computationally attractive, or might even be prohibitive, for large-scale problems; therefore, they also computed the GCV function with the aid of a Lanczos-type method. Other references on efficient ways to compute the GCV function include [6,19,20]. For problems of the form (2.1) with p = 2, GCV chooses the value λ that minimizes the function…”
confidence: 99%
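For the p = 2 (Tikhonov) case mentioned in this statement, the standard GCV function can be evaluated cheaply once the SVD of A is available. The sketch below is our own illustration, not the paper's notation: the function name, the λ grid, and the choice of zero-order regularization (L = I) are all assumptions.

```python
import numpy as np

def gcv_tikhonov(A, b, lams):
    """Evaluate the GCV function G(lam) for Tikhonov regularization
    min ||A x - b||^2 + lam ||x||^2, using the SVD of A (illustrative sketch)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                      # coefficients of b in range(A)
    b_perp2 = b @ b - beta @ beta       # squared norm of b outside range(A)
    m = A.shape[0]
    vals = []
    for lam in lams:
        f = s**2 / (s**2 + lam)         # Tikhonov filter factors
        res2 = np.sum(((1.0 - f) * beta) ** 2) + b_perp2  # ||A x_lam - b||^2
        vals.append(res2 / (m - np.sum(f)) ** 2)          # GCV quotient
    return np.array(vals)
```

The GCV choice of λ is the minimizer of this quotient; in practice one would minimize G with a one-dimensional search rather than over a fixed grid.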