2014
DOI: 10.1007/s10994-014-5437-0

Asymptotic analysis of the learning curve for Gaussian process regression

Abstract: This paper deals with the learning curve in a Gaussian process regression framework. The learning curve describes the generalization error of the Gaussian process used for the regression. The main result is the proof of a theorem giving the generalization error for a large class of correlation kernels and for any dimension when the number of observations is large. From this theorem, we can deduce the asymptotic behavior of the generalization error when the observation error is small. The presented proof genera…
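
In standard GP regression notation (generic symbols of our own, not taken from the paper), the learning curve studied here is the generalization error as a function of the number n of observations, which for a GP prior equals the integrated posterior variance:

```latex
% Learning curve / IMSE of GP regression with observation-noise variance \sigma_\varepsilon^2.
% Notation is generic and illustrative, not the paper's own.
\[
  \varepsilon(n)
  = \mathbb{E}\!\left[\int_{\mathcal{X}} \bigl(f(x) - \hat f_n(x)\bigr)^2 \,\mathrm{d}x\right]
  = \int_{\mathcal{X}} \Bigl(k(x,x) - k_n(x)^{\top}\bigl(K_n + \sigma_\varepsilon^2 I_n\bigr)^{-1} k_n(x)\Bigr)\,\mathrm{d}x,
\]
where $K_n = \bigl(k(X_i,X_j)\bigr)_{1 \le i,j \le n}$ and $k_n(x) = \bigl(k(x,X_i)\bigr)_{1 \le i \le n}$.
```

The theorem referenced in the abstract describes how this quantity decays as n grows large.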

Cited by 29 publications (10 citation statements) · References 21 publications
“…We note that the integrated mean squared error (IMSE) based criteria are often used for constructing experimental designs such that the resulting metamodel can achieve satisfactory predictive performance across the design space. For kriging prediction with nugget effect, Gratiet and Garnier (2015) provide theoretical results on the asymptotic values of pointwise MSE and IMSE and obtain the convergence rates of IMSE for selected kernels as the number of design points increases to infinity; they further discuss how the theoretical results can be used to determine the total budget required to achieve a pre-specified accuracy level in terms of IMSE. The differences between their work and our results given in Theorem 1 lie in the following aspects.…”
Section: Some Properties of Stochastic Kriging
confidence: 99%
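
To make the quantity in this statement concrete, the following minimal sketch (ours, not from either paper) estimates the IMSE of a GP/kriging predictor with nugget by Monte Carlo integration of the posterior variance; the squared-exponential kernel, the length scale, and all function names are illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo estimate of the IMSE of a GP/kriging
# predictor with nugget (observation-noise variance `noise_var`).
# Kernel and all names are illustrative, not from the cited papers.
import numpy as np

def gaussian_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel between row-stacked points in [0, 1]^d."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def imse(design, noise_var=1e-2, n_mc=4000, seed=1):
    """IMSE = integral of the GP posterior variance over [0, 1]^d,
    approximated by averaging over uniform Monte Carlo test points."""
    rng = np.random.default_rng(seed)
    n, d = design.shape
    K = gaussian_kernel(design, design) + noise_var * np.eye(n)
    x_test = rng.uniform(size=(n_mc, d))
    k_star = gaussian_kernel(x_test, design)                    # (n_mc, n)
    # Posterior variance: k(x, x) - k*(x)^T (K + noise * I)^{-1} k*(x).
    var = 1.0 - np.einsum("ij,ij->i", k_star, np.linalg.solve(K, k_star.T).T)
    return var.mean()

# The IMSE decreases as the number of design points n grows.
for n in (10, 40, 160, 640):
    design = np.random.default_rng(0).uniform(size=(n, 1))
    print(n, imse(design))
```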
“…With respect to metamodeling for mean response prediction in the stochastic simulation setting, some in-depth work has been conducted on the asymptotic properties of kriging prediction with nugget, with implications for nonsequential experimental designs under a large simulation budget (Gratiet and Garnier, 2015). However, a systematic account of sequential designs with a fixed simulation budget has yet to be established, despite the earlier efforts by van Beers and Kleijnen (2008), Ng and Yin (2012), Ajdari and Mahlooji (2014) and Mehdad and Kleijnen (2015).…”
Section: Introduction
confidence: 99%
“…The case where Y is observed exactly is treated by this framework by letting δ_0 = 0. Otherwise, letting δ_0 > 0 can correspond, for instance, to measurement errors (Bachoc et al., 2014) or to Monte Carlo computer experiments (Le Gratiet and Garnier, 2014). Note also that the case of a Gaussian process with a covariance function that is discontinuous at 0 (nugget effect) is mathematically equivalent to this framework when the observation points X_1, ..., X_n are pairwise distinct.…”
Section: Presentation and Notation for the Covariance Model
confidence: 99%
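
The equivalence invoked at the end of this statement can be written out in one line; the notation below is ours, reusing the quote's δ_0:

```latex
% Observing Y_i = f(X_i) + eps_i with i.i.d. noise of variance \delta_0 at
% pairwise-distinct points X_1, ..., X_n yields the same Gram matrix as
% exact observation of a process whose covariance has a nugget of size \delta_0.
\[
  \operatorname{Cov}(Y_i, Y_j)
  = k(X_i, X_j) + \delta_0 \,\mathbf{1}_{\{i=j\}}
  = k_{\mathrm{nug}}(X_i, X_j),
  \qquad
  k_{\mathrm{nug}}(x, x') = k(x, x') + \delta_0 \,\mathbf{1}_{\{x = x'\}},
\]
so the two settings lead to identical predictors when the $X_i$ are pairwise distinct.
```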
“…As a by-product of the proof of Theorem 3.1, the upper bound for var_DISK can be used to show that the integrated predictive variance of the GP decreases to zero as the subset sample size m → ∞ for various types of covariance kernels. A closely related work by Gratiet and Garnier (2015) studies the asymptotic behavior of the mean squared error of the GP, but unrealistically assumes that the error variance increases with the sample size, which prevents their predictive variance of the GP from converging to zero.…”
Section: Bayes L_2-Risk of DISK: Convergence Rates and the Choice of K
confidence: 99%
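
The remark about a growing error variance can be checked numerically. The sketch below (ours; the squared-exponential kernel and the linear growth rate noise_var = m·tau2 are illustrative assumptions, not the cited papers' setup) contrasts a fixed observation-noise variance, under which the integrated posterior variance vanishes as m grows, with one growing linearly in m, under which it stays bounded away from zero.

```python
# Hypothetical illustration: fixed vs. growing observation-noise variance.
import numpy as np

def integrated_posterior_var(m, noise_var, n_mc=2000, seed=0):
    """Monte Carlo estimate of the integrated GP posterior variance on [0, 1]."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(m, 1))                    # design points
    x = rng.uniform(size=(n_mc, 1))                 # test points
    k = lambda a, b: np.exp(-0.5 * (a - b.T) ** 2 / 0.2**2)
    K = k(X, X) + noise_var * np.eye(m)
    ks = k(x, X)                                    # (n_mc, m)
    var = 1.0 - np.einsum("ij,ij->i", ks, np.linalg.solve(K, ks.T).T)
    return var.mean()

tau2 = 1e-3
for m in (50, 200, 800):
    print(m,
          integrated_posterior_var(m, noise_var=1e-2),     # fixed noise: -> 0
          integrated_posterior_var(m, noise_var=m * tau2)) # growing noise: plateaus
```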