2020
DOI: 10.48550/arxiv.2012.08015
Preprint

Active Learning for Deep Gaussian Process Surrogates

Abstract: Deep Gaussian processes (DGPs) are increasingly popular as predictive models in machine learning (ML) for their non-stationary flexibility and ability to cope with abrupt regime changes in training data. Here we explore DGPs as surrogates for computer simulation experiments whose response surfaces exhibit similar characteristics. In particular, we transport a DGP's automatic warping of the input space and full uncertainty quantification (UQ), via a novel elliptical slice sampling (ESS) Bayesian posterior infer…
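The "automatic warping" the abstract refers to is functional composition: an inner GP layer warps the inputs, and an outer GP models the response on the warped space, producing non-stationary behavior from stationary building blocks. Below is a minimal sketch of sampling from a two-layer DGP prior, assuming squared exponential kernels and illustrative lengthscales; it is not the authors' implementation.

```python
import numpy as np

def sq_exp_kernel(x, xp, lengthscale=1.0, scale=1.0):
    """Squared exponential covariance between 1-D location vectors x, xp."""
    d2 = (x[:, None] - xp[None, :]) ** 2
    return scale * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
jitter = 1e-8 * np.eye(len(x))  # numerical stability for the prior draws

# Inner layer: latent warping w ~ GP(0, K(x, x)).
Kw = sq_exp_kernel(x, x, lengthscale=0.3) + jitter
w = rng.multivariate_normal(np.zeros_like(x), Kw)

# Outer layer: response y | w ~ GP(0, K(w, w)).  The covariance is computed
# on the *warped* inputs, which is what induces non-stationarity in x.
Ky = sq_exp_kernel(w, w, lengthscale=0.5) + jitter
y = rng.multivariate_normal(np.zeros_like(x), Ky)
```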

Cited by 7 publications (16 citation statements)
References 41 publications (61 reference statements)
“…Learning a 1D manifold as an analogue to AS is another option not yet transposed to GPs (Bridges et al, 2019), while local AS analysis might be a simpler way to go (Wycoff, 2021b). Along the same lines, it is also possible to include generative topographic mapping (Viswanath et al, 2011), deep GPs (Damianou and Lawrence, 2013; Hebbal et al, 2019; Sauer et al, 2020), or GP latent variable models (e.g., Lawrence, 2005; Titsias and Lawrence, 2010). An orthogonal direction for avoiding larger optimization budgets is multi-fidelity modeling, when cheaper but less accurate version(s) of the black-box are available, as exploited by, e.g., Ginsbourger et al (2012) and Falkner et al (2018).…”
Section: Non-linear Embeddings and Structured Spaces (mentioning)
confidence: 99%
“…Their prowess has been demonstrated on many classification tasks (Damianou and Lawrence, 2013; Fei et al, 2018; Yang and Klabjan, 2021). Compared with traditional DNNs, the flexibility of DGPs in uncertainty quantification makes them ideal candidates for surrogate modeling (Radaideh and Kozlowski, 2020; Sauer et al, 2020). DGPs are also commonly used in Bayesian optimization (Hebbal et al, 2021), multi-fidelity analysis (Ko and Kim, 2021), healthcare (Li et al, 2021), and other areas.…”
Section: Literature Review (mentioning)
confidence: 99%
“…One focuses on designing more efficient and accurate training algorithms, while the other focuses on constructing DGP architectures with sparsity. Efficient inference and training algorithms include expectation propagation (Bui et al, 2016), doubly stochastic variational inference (Salimbeni and Deisenroth, 2017), stochastic gradient Hamiltonian Monte Carlo (Havasi et al, 2018), and elliptical slice sampling (Sauer et al, 2020). Reformulation of DGPs into equivalent models is an alternative approach.…”
Section: Literature Review (mentioning)
confidence: 99%
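As context for the elliptical slice sampling entry in that list: ESS (Murray et al., 2010) updates a latent vector with a Gaussian prior by sweeping an ellipse through the current state and an auxiliary prior draw, with no step-size tuning. The sketch below shows one generic ESS update; `log_lik` and `prior_chol` are placeholders, and this illustrates the algorithm rather than the interface of Sauer et al.'s implementation.

```python
import numpy as np

def ess_step(f, prior_chol, log_lik, rng):
    """One elliptical slice sampling update for f ~ N(0, Sigma), Sigma = L L^T."""
    nu = prior_chol @ rng.standard_normal(f.shape)  # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())      # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)           # initial angle on the ellipse
    lo, hi = theta - 2.0 * np.pi, theta             # shrinking bracket
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new                            # accepted
        # rejected: shrink the bracket toward theta = 0 and resample the angle
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```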
“…As a result, whilst DSVI methods offer computational tractability, this could come at the expense of accurate uncertainty quantification (UQ) for the latent posteriors. To address these drawbacks, Sauer et al (2020) provide fully Bayesian (FB) inference that accounts for various uncertainties in the construction of DGP surrogates. However, the FB framework implemented in Sauer et al (2020) is restricted to limited DGP specifications, e.g., no more than three-layered DGPs with only squared exponential kernels, which potentially prevents wider application of DGPs in surrogate modeling and UQ exercises.…”
Section: Introduction (mentioning)
confidence: 99%
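The "various uncertainties" referenced above are typically propagated to predictions by averaging the outer GP's conditional moments over posterior samples of the latent warped layer. A minimal sketch of that aggregation follows, using the law of total variance; the helper names (`gp_predict`, `fb_predict`) and the kernel callable are hypothetical, not the API of any particular package.

```python
import numpy as np

def gp_predict(w_train, y_train, w_test, kernel, noise=1e-6):
    """Standard GP conditional mean/variance on (possibly warped) inputs."""
    K = kernel(w_train, w_train) + noise * np.eye(len(w_train))
    Ks = kernel(w_test, w_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    V = np.linalg.solve(L, Ks.T)
    var = np.diag(kernel(w_test, w_test)) - np.sum(V**2, axis=0)
    return mean, var

def fb_predict(w_samples, wtest_samples, y_train, kernel):
    """Aggregate GP predictions over posterior samples of the warped layer."""
    means, variances = [], []
    for w, wt in zip(w_samples, wtest_samples):
        m, v = gp_predict(w, y_train, wt, kernel)
        means.append(m)
        variances.append(v)
    means, variances = np.asarray(means), np.asarray(variances)
    mu = means.mean(axis=0)
    # law of total variance: within-sample variance plus between-sample spread
    sigma2 = variances.mean(axis=0) + means.var(axis=0)
    return mu, sigma2
```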