2021
DOI: 10.48550/arxiv.2108.12515
Preprint

Convergence Rates for Learning Linear Operators from Noisy Data

Abstract: We study the Bayesian inverse problem of learning a linear operator on a Hilbert space from its noisy pointwise evaluations on random input data. Our framework assumes that this target operator is self-adjoint and diagonal in a basis shared with the Gaussian prior and noise covariance operators arising from the imposed statistical model, and is able to handle target operators that are compact, bounded, or even unbounded. We establish posterior contraction rates with respect to a family of Bochner norms as the number of data tends to infinity.
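Because the statistical model is diagonal in a shared basis, the infinite-dimensional problem decouples into independent scalar Gaussian regressions, one per mode, each with a conjugate posterior in closed form. The following is a minimal numerical sketch of that setting; the truncation level, the particular power-law decays of the eigenvalues, prior variances, and input covariance, and the noise level are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

J, N = 200, 1000                 # basis truncation level and number of data (both hypothetical)
j = np.arange(1, J + 1)

true_eigs = j ** -1.5            # assumed eigenvalue decay of the target operator
prior_var = j ** -2.0            # assumed Gaussian prior variances, diagonal in the same basis
input_var = j ** -1.0            # assumed covariance spectrum of the random inputs
noise_var = 1e-2                 # assumed per-mode noise variance

# Coordinates of the noisy pointwise evaluations y_n = L x_n + eta_n in the shared basis.
X = rng.normal(0.0, np.sqrt(input_var), size=(N, J))
Y = true_eigs * X + rng.normal(0.0, np.sqrt(noise_var), size=(N, J))

# Conjugate Gaussian posterior per mode (the modes decouple because everything is diagonal):
#   precision_j = 1/prior_var_j + sum_n x_{nj}^2 / noise_var
#   mean_j      = (sum_n x_{nj} y_{nj} / noise_var) / precision_j
precision = 1.0 / prior_var + (X ** 2).sum(axis=0) / noise_var
post_mean = ((X * Y).sum(axis=0) / noise_var) / precision

# Error of the posterior mean in an input-weighted l2 norm, a stand-in for the Bochner norms.
err = np.sqrt(np.sum(input_var * (post_mean - true_eigs) ** 2))
print(f"N = {N}: weighted error of the posterior mean = {err:.4f}")
```

Rerunning the sketch with larger N shrinks the error, a finite-dimensional shadow of the posterior contraction the paper establishes.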


Cited by 6 publications (10 citation statements)
References 48 publications
“…Assumption 6 (Hölder input and output). Let X = Y = L²([−1, 1]^D) with the inner product (21). For some integer k > 0 and 0 < α ≤ 1, the support of the probability measure γ and the pushforward measure Ψ_#γ satisfies…”
Section: Legendre Polynomials
confidence: 99%
“…The generalization error in [51] is a posteriori, depending on the properties of the neural networks fitting the target operator. Recently, posterior rates for learning linear operators by Bayesian inversion have been studied in [21].…”
Section: Introduction
confidence: 99%
“…We further assume p_i ∝ i^{−p} and q_i ∝ i^{−q}. These commuting assumptions are also made in (Cabannes et al. 2021; de Hoop et al. 2021), due to Bochner's theorem.…”
Section: Problem Formulation
confidence: 64%
“…The main theoretical contribution of this work is a rigorous bound on the error propagation during the procedure outlined above, leading to the accuracy-vs-complexity results advertised in the abstract. This exponential accuracy is vastly better than what can be hoped for when using global low-rank approximations such as [6, 2, 27]. The principles underlying [16] are very similar to 2 and 3 above.…”
Section: Learning Operators With Neural Networks and Operator-Valued Kernels
confidence: 87%
“…More closely related to the present work are methods providing theoretical guarantees for learning structured linear operators. [6] approximates operators between infinite-dimensional Hilbert spaces from noisy measurements using a Bayesian approach and characterizes convergence (with randomized right-hand sides) in terms of the spectral decay of the target operator (assuming the operator is self-adjoint and diagonal in a basis shared with the Gaussian prior and noise covariance). [27] seeks to learn the Green's function of an elliptic PDE by empirical risk minimization over a reproducing kernel Hilbert space, characterizing convergence in terms of spectral decay.…”
Section: Learning Operators With Neural Networks and Operator-Valued Kernels
confidence: 99%