2015
DOI: 10.1016/j.jcp.2015.08.006
Full scale multi-output Gaussian process emulator with nonseparable auto-covariance functions


Cited by 15 publications (11 citation statements)
References 35 publications
“…Basically, the BLHS guarantees a good mix of short and long pairwise distances. The method is similar in flavor to so-called full scale approximation (Sang and Huang, 2012; Zhang et al., 2015). It also has aspects in common with composite likelihood approaches (Varin et al., 2011; Gu and Berger, 2016).…”
Section: Bootstrapped Block Latin Hypercube Subsamples
Mentioning confidence: 99%
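The BLHS idea is stated only in passing, so a concrete sketch may help. Below is a minimal Python illustration of drawing a single block Latin hypercube subsample, assuming inputs scaled to [0,1]^d; the function name and grid construction are ours, and the full BLHS of the quoted work additionally bootstraps over many such subsamples.

```python
import numpy as np

def block_lhs_subsample(X, m, rng=None):
    """One block Latin hypercube subsample of the rows of X (assumed in [0,1]^d).

    The unit cube is cut into m slices per dimension; a Latin hypercube
    pattern over the resulting grid selects m blocks (one per slice in
    every dimension), and all points falling inside the chosen blocks
    are kept.  Points within a block supply short pairwise distances;
    the spread-out blocks supply long ones.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Map each point to its block index along every dimension (0..m-1).
    cells = np.minimum((X * m).astype(int), m - 1)
    # Latin hypercube over blocks: one independent permutation per extra
    # dimension, so each slice index in dim 0 is paired with exactly one
    # slice index in every other dimension.
    perms = np.column_stack([np.arange(m)] +
                            [rng.permutation(m) for _ in range(d - 1)])
    # Keep every point whose block matches one of the m selected blocks.
    keep = np.zeros(n, dtype=bool)
    for block in perms:
        keep |= np.all(cells == block, axis=1)
    return X[keep]

# Example: subsample 10,000 points in [0,1]^2 with m = 4 blocks per axis;
# roughly a 1/m fraction of the points is retained.
X = np.random.default_rng(0).random((10_000, 2))
Xsub = block_lhs_subsample(X, m=4, rng=1)
print(Xsub.shape)
```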
“…With sparse approximation, storing and directly manipulating the training data for inference, as in a standard GP [1], is no longer necessary; instead, we need only utilize the posterior of the inducing points. In future work, it would be interesting to compare p(y_t | D_{0:t−1}) in (22) with the true predictive distribution of the model outputs y_t, which can be derived from the joint distribution of y_t and D_{0:t−1}. This could provide insights into the approximation error introduced by the use of the inducing-point assumption.…”
Section: A Model Prediction
Mentioning confidence: 99%
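To make the point that only the inducing-point posterior is needed at prediction time concrete, here is a minimal Python sketch of the standard inducing-point predictive equations, assuming an RBF kernel and a given Gaussian posterior q(u) = N(mu_u, Su) over the inducing values; the kernel choice, names, and shapes are illustrative assumptions, not the cited paper's equations (10), (20), or (22).

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel k(a, b) = var * exp(-|a - b|^2 / (2 ls^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def inducing_point_predict(Xs, Z, mu_u, Su, jitter=1e-8):
    """GP prediction from the posterior N(mu_u, Su) over inducing values u = f(Z).

    No training data appear here, only the q(u) statistics:
        mean = K_sz Kzz^{-1} mu_u
        cov  = K_ss - K_sz Kzz^{-1} (Kzz - Su) Kzz^{-1} K_zs
    """
    Kzz = rbf(Z, Z) + jitter * np.eye(len(Z))
    Ksz = rbf(Xs, Z)
    Kss = rbf(Xs, Xs)
    A = np.linalg.solve(Kzz, Ksz.T).T          # K_sz Kzz^{-1}
    mean = A @ mu_u
    cov = Kss - A @ (Kzz - Su) @ A.T
    return mean, cov

# Toy usage: 5 inducing points on a grid, an arbitrary Gaussian q(u).
Z = np.linspace(0, 1, 5)[:, None]
mu_u = np.sin(2 * np.pi * Z[:, 0])
Su = 0.01 * np.eye(5)
m, S = inducing_point_predict(np.array([[0.3], [0.7]]), Z, mu_u, Su)
print(m, np.diag(S))
```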
“…Substituting (10) and (20) into (22) and following the computations in the prediction step of the standard Kalman filter yield [53]…”
Section: A Model Prediction
Mentioning confidence: 99%
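For reference, the "prediction step of the standard Kalman filter" the passage invokes is shown below as a minimal Python sketch for a generic linear-Gaussian model x_t = A x_{t−1} + w_t, y_t = C x_t + v_t; the matrices here are placeholders, not the quantities in the cited paper's (10), (20), and (22).

```python
import numpy as np

def kf_predict(mean, cov, A, Q):
    """Prediction step for x_t = A x_{t-1} + w_t, w_t ~ N(0, Q).

    Propagates the posterior N(mean, cov) of x_{t-1} to the prior of x_t:
        mean' = A mean,   cov' = A cov A^T + Q.
    """
    return A @ mean, A @ cov @ A.T + Q

def kf_observe(pred_mean, pred_cov, C, R):
    """Predictive distribution of y_t = C x_t + v_t, v_t ~ N(0, R)."""
    return C @ pred_mean, C @ pred_cov @ C.T + R

# Toy usage: constant-velocity dynamics with a position observation.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
m1, P1 = kf_predict(np.zeros(2), np.eye(2), A, Q)
y_mean, y_cov = kf_observe(m1, P1, C, R)
print(y_mean, y_cov)
```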
“…MTGPs model temporal or spatial relationships among infinitely many random variables, as scalar GPs do, but also account for the statistical dependence across different sources of data (or tasks) [3,4,5,6,7,8,9]. How to choose an appropriate kernel that jointly models the cross-covariance between tasks and the auto-covariance within each task is the core aspect of MTGP design [3,10,11,12,5,13,14].…”
Section: Introduction
Mentioning confidence: 99%
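As a point of contrast for the nonseparable auto-covariances in the paper's title, the simplest multi-task kernel is the separable intrinsic-coregionalization form, sketched below in Python; the names and the toy task-similarity matrix B are illustrative assumptions, not the cited paper's construction.

```python
import numpy as np

def rbf(X, ls=1.0):
    """Squared-exponential auto-covariance on scalar inputs X of shape (n,)."""
    d2 = (X[:, None] - X[None, :])**2
    return np.exp(-0.5 * d2 / ls**2)

# Separable (intrinsic coregionalization) multi-task covariance:
#   cov(f_i(x), f_j(x')) = B[i, j] * k(x, x'),
# so the joint covariance over (tasks x inputs) is a Kronecker product.
X = np.linspace(0, 1, 4)
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])      # PSD task-similarity matrix
K_joint = np.kron(B, rbf(X))    # shape (2*4, 2*4)
print(K_joint.shape)
```

A nonseparable kernel, which is the setting of the emulated paper, cannot be written as this Kronecker product: the input correlation structure itself may differ across task pairs rather than being a single shared kernel scaled by B.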