2019 | DOI: 10.1115/1.4043930

Bayesian Optimal Design of Experiments for Inferring the Statistical Expectation of Expensive Black-Box Functions

Abstract: Bayesian optimal design of experiments (BODE) has been successful in acquiring information about a quantity of interest (QoI) which depends on a black-box function. BODE is characterized by sequentially querying the function at specific designs selected by an infill-sampling criterion. However, most current BODE methods operate in specific contexts like optimization, or learning a universal representation of the black-box function. The objective of this paper is to design a BODE for estimating the statistical …
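The abstract describes a sequential design loop: fit a surrogate to the queries made so far, select the next design with an infill-sampling criterion, and repeat until the QoI estimate converges. A minimal sketch of such a loop, assuming a Gaussian-process surrogate (scikit-learn) and a stand-in maximum-variance criterion rather than the paper's actual information-based acquisition:

```python
# Minimal BODE-style loop for estimating E[f(X)]; the black-box f, the GP
# kernel, and the infill criterion below are illustrative stand-ins, not the
# paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical stand-in for the expensive black-box function.
    return np.sin(3.0 * x) + 0.5 * x

# QoI: E[f(X)] with X ~ Uniform(0, 1) (an assumed input distribution),
# estimated by averaging the surrogate mean over Monte Carlo input samples.
x_mc = rng.uniform(0.0, 1.0, size=(2000, 1))

X = rng.uniform(0.0, 1.0, size=(3, 1))  # initial design
y = f(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(X, y)
    # Infill-sampling criterion (illustrative): query where the posterior
    # standard deviation is largest; the paper's criterion instead targets
    # information gain about the expectation itself.
    cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    _, std = gp.predict(cand, return_std=True)
    x_next = cand[[np.argmax(std)]]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

# Estimate the statistical expectation from the surrogate mean.
print("estimated E[f(X)]:", gp.predict(x_mc).mean())
```

The design choice that distinguishes BODE variants is precisely the acquisition used in the loop above; a QoI-aware criterion concentrates samples where they most reduce uncertainty in E[f(X)], not uniformly over the input space.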

Citation types: 1 supporting, 33 mentioning, 0 contrasting

Years published: 2020–2025


Cited by 27 publications (34 citation statements)
References 59 publications
“…By using the GitHub code from the authors of Ref. [1] for their test cases, we have confirmed that the new results based on Eq. (15) are the same as the original results based on Eq.…”
Section: Derivation of the Simplified Acquisition G(x) (supporting, confidence: 71%)
“…with s and Λ involving hyperparameters of the kernel function (with either optimized values from training or selected values as in Ref. [1]). The main computation is then Eq.…”
Section: Derivation of the Simplified Acquisition G(x) (mentioning, confidence: 99%)
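The quoted statement refers to kernel hyperparameters s and Λ without reproducing the kernel itself. A squared-exponential form consistent with that notation, written here as an assumption since the cited equation is not shown in the excerpt:

```latex
k(\mathbf{x}, \mathbf{x}') = s^2 \exp\!\left( -\tfrac{1}{2}\, (\mathbf{x} - \mathbf{x}')^{\top} \Lambda^{-1} (\mathbf{x} - \mathbf{x}') \right)
```

Here s is the signal standard deviation and Λ a (typically diagonal) matrix of squared length-scales, either optimized during training or fixed a priori, as the quote notes.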
“…This allowed us to utilize gradient-based optimization methods. For the adaptive strategies based on the mutual information I(x, y|D, h) and its second-order approximation I_G(x, y|D, h), we used a random sampling approach and equations (12) and (14), respectively. Specifically, we generated 10^4 samples from the input distribution of x and utilized the exact expression:…”
Section: Linear Map With a 2D Input Space (mentioning, confidence: 99%)
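The quoted statement evaluates the mutual-information acquisitions by plain Monte Carlo over the input distribution. A sketch of that sampling step, with a placeholder integrand since equations (12) and (14) of the citing paper are not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(1)
# 10^4 samples from the (assumed standard Gaussian) input distribution of x.
x_samples = rng.normal(size=(10_000, 2))

def integrand(x_samples):
    # Placeholder for the mutual-information integrand of equations (12)/(14);
    # a real implementation would evaluate I(x, y | D, h) or I_G(x, y | D, h).
    return np.exp(-0.5 * np.sum(x_samples**2, axis=1))

# Sample-average approximation of the expectation over the input distribution.
print("MC estimate:", integrand(x_samples).mean())
```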
“…The problem has been studied extensively using criteria relying on mutual information theory or the Kullback-Leibler (KL) divergence (e.g., [15]). More recently, another criterion was introduced focusing on the rapid convergence of the output statistics [16].…”
Section: Introduction (mentioning, confidence: 99%)