2021
DOI: 10.48550/arxiv.2107.01590
Preprint
Deep Gaussian Process Emulation using Stochastic Imputation

Abstract: We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using stochastic imputation. By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs. This transformation yields a simple yet efficient DGP training procedure that only involves optimization of conventional stationary GPs. In addition, the analytically tractable mean and variance of th…
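The core idea in the abstract — impute the hidden layer, so the DGP decouples into ordinary stationary GPs that can each be trained by standard marginal-likelihood optimization — can be sketched as follows. This is a heavily simplified toy illustration, not the paper's method: the hidden layer is drawn from its GP prior here, whereas the paper samples it conditional on the data (e.g. via elliptical slice sampling), and all hyperparameter values below are arbitrary placeholders.

```python
import numpy as np

def rbf(X1, X2, lengthscale, variance):
    """Squared-exponential (stationary) kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_loglik(X, y, lengthscale, variance, noise):
    """Log marginal likelihood of a conventional GP -- the quantity
    that would be optimized per layer once the hidden layer is fixed."""
    n = len(X)
    K = rbf(X, X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2.0 * np.pi))

rng = np.random.default_rng(0)
n = 30
X = np.linspace(0.0, 1.0, n)[:, None]
y = np.sin(4 * np.pi * X[:, 0] ** 2) + 0.05 * rng.standard_normal(n)

# One imputation sweep for a two-layer DGP (toy version):
# 1. Impute the hidden layer W. Here it is drawn from the layer-1 GP
#    prior; the paper instead samples it given the data.
K1 = rbf(X, X, 0.3, 1.0) + 1e-6 * np.eye(n)
W = np.linalg.cholesky(K1) @ rng.standard_normal(n)

# 2. With W fixed, the DGP decouples into two conventional stationary
#    GPs -- layer 1 maps X -> W, layer 2 maps W -> y -- so each layer's
#    hyperparameters can be fitted by standard maximum likelihood.
ll1 = gp_loglik(X, W, lengthscale=0.3, variance=1.0, noise=1e-6)
ll2 = gp_loglik(W[:, None], y, lengthscale=0.5, variance=1.0, noise=0.01)
```

In a full implementation this sweep would alternate: re-impute the hidden layer, then re-optimize each layer's GP hyperparameters, which is what makes the training loop involve only conventional stationary-GP computations.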

Cited by 2 publications (2 citation statements); References 29 publications
“…The most common emulator is the Gaussian process emulator (Kennedy & O'Hagan, 2001), where the map between the input parameters and the numerical model outputs is modeled via a Gaussian process. Several developments on the vanilla Gaussian process have been proposed for the purpose of emulation, such as the treed Gaussian process (Gramacy & Lee, 2008) and the deep Gaussian process of Damianou & Lawrence (2013) or variants thereof (Monterrubio-Gómez et al., 2020; Ming et al., 2021; Marmin & Filippone, 2022; Sauer et al., 2022).…”
Section: Deep Emulation
confidence: 99%
“…e.g. when too little training data are available or when the assumption of a smooth functional mapping may be violated (for which we suggest the use of deep GPs [46, 47]).…”
confidence: 99%