2006
DOI: 10.1090/S0025-5718-06-01873-4

Regularization of some linear ill-posed problems with discretized random noisy data

Abstract: For linear statistical ill-posed problems in Hilbert spaces we introduce an adaptive procedure to recover the unknown solution from indirect, discrete, and noisy data. This procedure is shown to be order optimal for a large class of problems. Smoothness of the solution is measured in terms of general source conditions. The concept of operator monotone functions turns out to be an important tool for the analysis.
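The abstract's two key notions can be made concrete. The display below is a standard textbook formulation of a general source condition, given here for orientation; it is not quoted from the paper, and the symbols (A for the forward operator, x† for the unknown solution, R for the source bound) follow the usual conventions rather than the paper's own notation.

```latex
% A standard general source condition (illustrative, not quoted from the
% paper): the unknown solution x^\dagger is assumed to lie in the range of
% \varphi(A^*A) for an index function \varphi.
\[
  x^\dagger = \varphi(A^{*}A)\, v , \qquad \|v\| \le R ,
\]
% where \varphi\colon [0,\|A\|^{2}] \to [0,\infty) is continuous, increasing,
% and \varphi(0) = 0.  The choice \varphi(t) = t^{\nu} recovers Hölder-type
% smoothness; operator monotonicity of \varphi guarantees that
% 0 \preceq B \preceq C implies \varphi(B) \preceq \varphi(C), which is the
% property the operator-level error bounds rely on.
```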

Cited by 91 publications (86 citation statements), published 2006–2018. References 15 publications.
“…Thus, one needs to find a balance between the approximation and the noise propagation errors. This is achieved by presenting the a posteriori parameter choice rule, which is based on the so-called balancing principle that has been extensively studied (see, for example, [11], [20] and references therein).…”
Section: Adaptive Parameter Choice
confidence: 99%
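The balancing principle named in this quote has a compact algorithmic core. The following Python sketch shows a generic Lepskii-type version of it; the function name, the constant kappa = 4, and the interface are illustrative assumptions, not the exact rule of [11] or [20].

```python
import numpy as np

def balancing_choice(estimates, psi, kappa=4.0):
    """Generic Lepskii-type balancing principle (illustrative sketch).

    estimates : candidate reconstructions x_1, ..., x_m, ordered so that
                regularization increases along the list
    psi       : psi[k] bounds the noise-propagation error of estimates[k];
                by the ordering above it is decreasing in k
    Returns the index of the selected candidate.
    """
    chosen = 0
    for i in range(len(estimates)):
        # accept x_i if it stays within the noise corridor of every less
        # regularized (noisier) candidate x_j, j < i
        if all(np.linalg.norm(estimates[i] - estimates[j]) <= kappa * psi[j]
               for j in range(i)):
            chosen = i
    return chosen
```

Only computable quantities enter: differences between candidate reconstructions and a known bound on the noise-propagation term. This is what lets the rule adapt to unknown smoothness of the solution.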
“…Definition 2. Following [20], we say that a function ϕ(n) = ϕ(n; f, δ) is admissible for given f and δ if the following holds…”
Section: Adaptive Parameter Choice
confidence: 99%
“…Since then it has been developed further for regularization of linear inverse problems [50,135,105,106,18] and nonlinear inverse problems [12,13] in deterministic and stochastic settings. The notation we will use is taken from [9].…”
Section: Balancing Principle
confidence: 99%
“…The idea of our adaptive principle has its origin in the paper [11], devoted to statistical estimation of a function y(t) with unknown Hölder smoothness from direct observations blurred by Gaussian white noise. In the context of general ill-posed problems this idea has been realized in [5], [25], [8], [15]–[17], [20]. We use this idea for the adaptive choice of the stepsize in finite-difference methods because the structure of the error estimate (2.5) is very similar to the loss function of statistical estimation, where some parameter always controls the tradeoff between the bias and the variance of the risk.…”
Section: How Do We Approximate a Derivative y′(t) of a Smooth Function
confidence: 99%
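The stepsize analogue described in this quote can be sketched in the same Lepskii-type form. In the toy model below the total error of a central difference quotient is bounded by C·h² + δ/h (bias plus noise propagation); the code and names are illustrative, and the estimate (2.5) of the source is not reproduced here.

```python
def central_difference(y, t, h):
    # symmetric difference quotient: bias O(h^2), noise amplified by 1/h
    return (y(t + h) - y(t - h)) / (2.0 * h)

def adaptive_stepsize(y, t, delta, h_grid, kappa=4.0):
    """Lepskii-type stepsize choice for numerical differentiation
    (illustrative sketch; delta is assumed to bound the data noise)."""
    hs = sorted(h_grid)                   # small h first: noisiest candidates
    d = [central_difference(y, t, h) for h in hs]
    psi = [delta / h for h in hs]         # noise-propagation bound, decreasing
    chosen = 0
    for i in range(len(hs)):
        # keep enlarging h while the quotient agrees with every noisier
        # (smaller-h) candidate up to its noise corridor
        if all(abs(d[i] - d[j]) <= kappa * psi[j] for j in range(i)):
            chosen = i
    return d[chosen], hs[chosen]
```

Minimizing the model bound C·h² + δ/h analytically gives h of order (δ/C)^(1/3); the comparison rule reaches that order without knowing C, mirroring the bias–variance tradeoff the quote describes.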