Published: 2021
DOI: 10.1002/pamm.202000357
Sensor selection for hyper‐parameterized linear Bayesian inverse problems

Abstract: Models of physical processes often depend on parameters, such as material properties or source terms, that are only known with some uncertainty. Measurement data can be used to estimate these parameters and thereby improve the model's credibility. When measurements become expensive, it is important to choose the most informative data. This task becomes even more challenging when the model configurations vary and the data noise is correlated. In this poster we summarize our results in [1] and present an observa…
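The sensor-selection task described in the abstract can be illustrated, under strong simplifying assumptions (a single fixed model configuration and independent Gaussian noise, whereas the paper treats hyper-parameterized models with correlated noise), by a greedy D-optimal design for a linear Gaussian model. All names here (`greedy_sensor_selection`, `G`, `noise_var`) are illustrative and not taken from the paper:

```python
import numpy as np

def greedy_sensor_selection(G, prior_cov, noise_var, k):
    """Pick k sensor rows of G that maximize the log-determinant of the
    posterior precision (D-optimal design) for the linear Gaussian model
    y_i = G[i] @ theta + eps_i, eps_i ~ N(0, noise_var)."""
    m, _ = G.shape
    prec = np.linalg.inv(prior_cov)  # start from the prior precision
    chosen = []
    for _ in range(k):
        best_gain, best_i = -np.inf, None
        for i in range(m):
            if i in chosen:
                continue
            # Rank-one precision update if sensor i were added
            cand = prec + np.outer(G[i], G[i]) / noise_var
            _, logdet = np.linalg.slogdet(cand)
            if logdet > best_gain:
                best_gain, best_i = logdet, i
        chosen.append(best_i)
        prec = prec + np.outer(G[best_i], G[best_i]) / noise_var
    return chosen

# Toy example: 2 parameters, 5 candidate sensors, select 2
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 2))
sensors = greedy_sensor_selection(G, np.eye(2), noise_var=0.1, k=2)
```

The greedy criterion here (log-determinant gain) is one common choice; the paper's observability-based criterion and its treatment of varying configurations are not reproduced by this sketch.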

Cited by 3 publications (2 citation statements)
References 2 publications
“…After measuring the scattering from the reference ground, the MUT is introduced, and the scattering parameters are measured. The material characterization is treated as the inverse problem, and the parameter uncertainties are accounted for in the inversion method [18][19][20]. An iterative inversion procedure minimizes the residual between the assumed data and measured (observed) data, as shown in Figure 1.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
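The "iterative inversion procedure [that] minimizes the residual between the assumed data and measured (observed) data" quoted above can be sketched as a plain Gauss-Newton loop. This is a generic illustration, not the cited papers' method; the toy exponential forward model and all names are assumptions:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    """Iteratively minimize ||residual(p)||^2 via Gauss-Newton steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)          # misfit between model output and data
        J = jacobian(p)          # sensitivity of the model output to p
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Toy inverse problem: recover (a, b) from noiseless data d_i = a * exp(b * t_i)
t = np.linspace(0.0, 1.0, 10)
true_p = np.array([2.0, -1.5])
d = true_p[0] * np.exp(true_p[1] * t)

residual = lambda p: p[0] * np.exp(p[1] * t) - d
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])

p_hat = gauss_newton(residual, jacobian, p0=[1.0, 0.0])
```

A practical inversion would add regularization or a prior term (as in the Bayesian setting of the main paper) and a step-length safeguard; both are omitted here for brevity.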
“…To address these computational challenges, different classes of methods have been developed by exploiting (1) sparsity via polynomial chaos approximation of parameter-to-observable maps [4][5][6], (2) Laplace approximation of the posterior [7][8][9][10][11], (3) intrinsic low dimensionality by low-rank approximation of (prior-preconditioned and data-informed) operators [7,[12][13][14][15][16], (4) decomposability by offline (for PDE-constrained approximation)-online (for design optimization) decomposition [17,18], and (5) surrogate models of the PDEs, parameter-to-observable map, or posterior distribution by model reduction [19,20] and deep learning [21][22][23].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
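Item (3) in the statement above, low-rank approximation of the prior-preconditioned operator, can be sketched for a linear Gaussian model: the posterior covariance follows from the leading eigenpairs of the prior-preconditioned data-misfit Hessian. The function name and the full-rank sanity check are illustrative assumptions, not code from the cited works:

```python
import numpy as np

def low_rank_posterior_cov(G, prior_cov, noise_var, rank):
    """Low-rank posterior covariance for the linear Gaussian model
    y = G @ theta + eps, eps ~ N(0, noise_var * I), prior N(0, prior_cov).

    Uses the leading eigenpairs of the prior-preconditioned misfit Hessian
    H_pp = L.T @ G.T @ G @ L / noise_var, with prior_cov = L @ L.T:
        post_cov ~= prior_cov - L V_r diag(lam_r / (1 + lam_r)) V_r.T L.T
    """
    L = np.linalg.cholesky(prior_cov)
    H_pp = L.T @ G.T @ G @ L / noise_var
    lam, V = np.linalg.eigh(H_pp)                 # ascending eigenvalues
    lam, V = lam[::-1][:rank], V[:, ::-1][:, :rank]  # keep the largest
    D = np.diag(lam / (1.0 + lam))
    return prior_cov - L @ V @ D @ V.T @ L.T

# Sanity check: with rank equal to the parameter dimension the formula is exact
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 3))
prior = 0.5 * np.eye(3)
post = low_rank_posterior_cov(G, prior, noise_var=0.2, rank=3)
```

In large-scale applications the eigendecomposition would be replaced by a randomized or Lanczos method applied matrix-free; the dense version here is only for illustration.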