2019
DOI: 10.1080/01630563.2019.1604546
Heuristic Parameter Choice Rules for Tikhonov Regularization with Weakly Bounded Noise

Abstract: We study the choice of the regularisation parameter for linear ill-posed problems in the presence of noise that is possibly unbounded but only finite in a weaker norm, and when the noise level is unknown. For this task, we analyse several heuristic parameter choice rules, such as the quasi-optimality, heuristic discrepancy, and Hanke-Raus rules, and adapt the latter two to the weakly bounded noise case. We prove convergence and convergence rates under certain noise conditions. Moreover, we analyse and provide c…
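The rules named in the abstract can be illustrated numerically. Below is a minimal sketch (not the authors' code) of the quasi-optimality and heuristic discrepancy rules for Tikhonov regularization on a synthetic test problem; the grid, the forward operator, and the noise level are illustrative assumptions, and the weakly-bounded-noise adaptation analysed in the paper is not reproduced here.

```python
# Illustrative sketch of two heuristic parameter choice rules for Tikhonov
# regularization. The test problem and parameter grid are assumptions made
# for demonstration only.
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def quasi_optimality(A, y, alphas):
    """Quasi-optimality rule: minimize ||x_{alpha_{j+1}} - x_{alpha_j}||
    over a geometric grid of regularization parameters (no noise level used)."""
    xs = [tikhonov(A, y, a) for a in alphas]
    psi = [np.linalg.norm(xs[j + 1] - xs[j]) for j in range(len(alphas) - 1)]
    j = int(np.argmin(psi))
    return alphas[j], xs[j]

def heuristic_discrepancy(A, y, alphas):
    """Heuristic discrepancy rule: minimize ||A x_alpha - y|| / sqrt(alpha)
    (again, no knowledge of the noise level is required)."""
    psi = [np.linalg.norm(A @ tikhonov(A, y, a) - y) / np.sqrt(a) for a in alphas]
    j = int(np.argmin(psi))
    return alphas[j], tikhonov(A, y, alphas[j])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Mildly ill-conditioned forward operator with prescribed singular value decay.
    n = 50
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = U @ np.diag(np.logspace(0, -6, n)) @ V.T
    x_true = np.sin(np.linspace(0, np.pi, n))
    y = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy data, delta unknown to the rules

    alphas = np.logspace(-12, 0, 60)                  # geometric grid alpha_j = q^j * alpha_0
    a_qo, x_qo = quasi_optimality(A, y, alphas)
    a_hd, x_hd = heuristic_discrepancy(A, y, alphas)
    print(f"quasi-optimality:      alpha = {a_qo:.2e}, error = {np.linalg.norm(x_qo - x_true):.3f}")
    print(f"heuristic discrepancy: alpha = {a_hd:.2e}, error = {np.linalg.norm(x_hd - x_true):.3f}")
```

Both rules select the parameter by minimizing a computable surrogate functional over the grid; their justification in the weakly bounded noise setting, under suitable noise conditions, is the subject of the paper itself.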

Citations: cited by 7 publications (4 citation statements)
References: 23 publications
“…In case the noise level δ is unknown or estimates of it are unreliable, then instead of the discrepancy principle one can use heuristic stopping rules such as the heuristic discrepancy principle [23,24], which determines the stopping index k* by minimizing…”
Section: Iterative Regularization Approach
confidence: 99%
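The excerpt above is truncated before the minimized functional is stated. As a point of reference, a common form of this stopping rule for Landweber iteration chooses k* to minimize ψ(k) = √k · ‖A x_k − y‖; the sketch below assumes that form and is not taken from the cited work.

```python
# Sketch (an assumption, not the cited paper's formulation) of the heuristic
# discrepancy principle as a stopping rule for Landweber iteration:
# pick k* minimizing psi(k) = sqrt(k) * ||A x_k - y||, without using the noise level.
import numpy as np

def landweber_heuristic_stop(A, y, step, k_max):
    """Run Landweber iteration x_{k+1} = x_k + step * A^T (y - A x_k) and return
    the index k* and iterate minimizing sqrt(k) * ||A x_k - y|| over k <= k_max."""
    x = np.zeros(A.shape[1])
    best_psi, best_x, best_k = np.inf, x.copy(), 0
    for k in range(1, k_max + 1):
        x = x + step * A.T @ (y - A @ x)
        psi = np.sqrt(k) * np.linalg.norm(A @ x - y)
        if psi < best_psi:
            best_psi, best_x, best_k = psi, x.copy(), k
    return best_k, best_x

# Usage note: for convergence of the iteration, step should satisfy
# step < 2 / ||A||_2^2, e.g. step = 1.0 / np.linalg.norm(A, 2) ** 2.
```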
“…We omit the proof as it is analogous to the above. Note that if the source condition (10) holds, α* is selected according to the right quasi-optimality rule and the auto-regularisation condition (27) is satisfied, then one may also prove that…”
Section: The Right Quasi-optimality Rule
confidence: 99%
“…It is important to note that the aforementioned noise conditions utilised the spectral theory for self-adjoint linear operators. A recent discussion and extension of the noise conditions within the linear theory may be found in [27].…”
Section: Introduction
confidence: 99%