2001
DOI: 10.1162/08997660151134343
Predictive Approaches for Choosing Hyperparameters in Gaussian Processes

Abstract: Gaussian processes are powerful regression models specified by parameterized mean and covariance functions. Standard approaches to choosing these parameters (known as hyperparameters) are maximum likelihood and maximum a posteriori. In this article, we propose and investigate predictive approaches based on Geisser's predictive sample reuse (PSR) methodology and the related Stone's cross-validation (CV) methodology. More specifically, we derive results for Geisser's surrogate predictive probability (GPP)…
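To make the contrast concrete, the two families of criteria can be written side by side. This is a standard-notation sketch, not quoted from the article: K_θ denotes the n×n covariance matrix of the training targets y (kernel matrix plus noise variance), and y_{-i} is y with the i-th observation removed.

```latex
% Maximum (marginal) likelihood: choose \theta to maximize
\log p(\mathbf{y} \mid X, \theta)
  = -\tfrac{1}{2}\,\mathbf{y}^{\top} K_{\theta}^{-1}\mathbf{y}
    - \tfrac{1}{2}\log\lvert K_{\theta}\rvert
    - \tfrac{n}{2}\log 2\pi .

% Geisser's surrogate predictive probability (GPP): choose \theta to
% maximize the product of leave-one-out predictive densities
G(\theta) = \prod_{i=1}^{n} p\bigl(y_i \mid X, \mathbf{y}_{-i}, \theta\bigr).
```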

Cited by 113 publications (91 citation statements: 0 supporting, 90 mentioning, 1 contrasting; citing publications span 2004–2023). References 8 publications.

Citation statements, ordered by relevance:
“…On the other hand, the problem in Eq. (14) is similar to that of the regularized Kernel Ridge Regression (KRR) [21] and has a closed-form solution:…”
Section: VC-GPM: Learning and Inference (mentioning)
confidence: 99%
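The closed-form solution alluded to in this excerpt is the standard kernel ridge regression estimate. A minimal sketch, assuming a precomputed kernel matrix K and a regularization weight lam (the function and variable names are illustrative, not taken from the cited work):

```python
import numpy as np

def krr_fit(K, y, lam):
    """Closed-form kernel ridge regression: alpha = (K + lam*I)^{-1} y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def krr_predict(K_test_train, alpha):
    """Predictions at test points: f(X*) = K(X*, X) @ alpha."""
    return K_test_train @ alpha
```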
See 1 more Smart Citation
“…On the other hand, the problem in Eq. (14) is similar to that of the regularized Kernel Ridge Regression (KRR) [21] and has a closed-form solution:…”
Section: Vc-gpm: Learning and Inferencementioning
confidence: 99%
“…To alleviate this, we use the notion of the Leave-One-Out (LOO) cross-validation procedure for the KRR [21] to define learning of γ^{(v)} and λ^{(v)}, and, thus, obtain A^{(v)} indirectly. The learning in LOO is based on the fact that given any training set and the corresponding regression model, if we add a sample to the training set with the target equal to the output predicted by the model, the latter will not change since the cost will not increase [21]. Thus, given the training set with the sample y …”
Section: VC-GPM: Learning and Inference (mentioning)
confidence: 99%
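For kernel ridge regression the LOO residuals need not be obtained by refitting n times; they follow in closed form from the smoother matrix H = K (K + λI)^{-1}. A sketch under the same illustrative naming as above:

```python
import numpy as np

def krr_loo_residuals(K, y, lam):
    """Exact leave-one-out residuals for kernel ridge regression.

    Uses the linear-smoother identity r_i = (y_i - yhat_i) / (1 - H_ii),
    with H = K (K + lam*I)^{-1}, so no model is ever refitted.
    """
    n = K.shape[0]
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))
```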
“…In our current system, we employ leave-one-out cross-validation to adapt the hyper-parameters. Alternative cross-validation approaches in the context of GPs are described by Sundararajan and Keerthi [10]. In general, one seeks to find hyper-parameters that minimize the average loss of all training samples given a predefined optimization criterion.…”
Section: GP Model With Individual Noise Levels (mentioning)
confidence: 99%
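In a GP the LOO predictive quantities also have closed forms in terms of a single inverse of the covariance matrix, which is what makes the hyperparameter adaptation cited here tractable. A sketch of the standard identity for the LOO squared-error criterion (one possible loss; not necessarily the criterion used in the citing system, and K_y here is an assumed name for the kernel matrix plus per-sample noise on the diagonal):

```python
import numpy as np

def gp_loo_squared_error(K_y, y):
    """Average LOO squared error for GP regression with covariance K_y.

    Uses mu_{-i} = y_i - [K_y^{-1} y]_i / [K_y^{-1}]_ii, so the n
    leave-one-out fits reduce to a single matrix inversion.
    """
    K_inv = np.linalg.inv(K_y)
    loo_residuals = (K_inv @ y) / np.diag(K_inv)  # equals y_i - mu_{-i}
    return np.mean(loo_residuals ** 2)
```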
“…An alternative way of inferring θ is to use a Bayesian variant of the leave-one-out error (GPP, Geisser's surrogate predictive probability, [4]). In our study we will use both methods, choosing the most appropriate one for each of our two covariance functions.…”
Section: Gaussian Process Regression (mentioning)
confidence: 99%
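The GPP criterion is cheap to evaluate because the leave-one-out predictive distributions of a GP regression model are available in closed form from one inverse of the covariance matrix K_y. A sketch in assumed standard notation:

```latex
y_i \mid \mathbf{y}_{-i}, \theta \;\sim\; \mathcal{N}\bigl(\mu_{-i}, \sigma_{-i}^{2}\bigr),
\qquad
\mu_{-i} = y_i - \frac{[K_y^{-1}\mathbf{y}]_i}{[K_y^{-1}]_{ii}},
\qquad
\sigma_{-i}^{2} = \frac{1}{[K_y^{-1}]_{ii}},

\log G(\theta) = \sum_{i=1}^{n} \log \mathcal{N}\bigl(y_i \mid \mu_{-i}, \sigma_{-i}^{2}\bigr).
```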
“…Indeed, the lengthscales λ_d can grow to eliminate the contribution of any irrelevant input dimension. The parameters σ²_ν, σ² and g of the polynomial covariance function were estimated by maximizing the GPP criterion [4]. The parameters σ²_ν, σ² and the λ_d of the squared exponential kernel were estimated by maximizing the marginal log likelihood [5].…”
Section: Gaussian Process Regression (mentioning)
confidence: 99%
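The squared-exponential kernel with per-dimension lengthscales λ_d referenced here has a simple form: when a λ_d grows large, dimension d drops out of the covariance, which is the automatic-relevance-determination effect the excerpt describes. A minimal sketch of that kernel together with the marginal log likelihood it is tuned against (function and variable names are illustrative):

```python
import numpy as np

def sq_exp_ard(X1, X2, signal_var, lengthscales):
    """Squared-exponential kernel with ARD lengthscales:
    k(x, x') = s^2 * exp(-0.5 * sum_d (x_d - x'_d)^2 / l_d^2).
    A large l_d suppresses the contribution of input dimension d."""
    diff = X1[:, None, :] - X2[None, :, :]            # shape (n1, n2, D)
    sq = np.sum((diff / lengthscales) ** 2, axis=-1)
    return signal_var * np.exp(-0.5 * sq)

def log_marginal_likelihood(K, noise_var, y):
    """log p(y | X, theta) for GP regression with K_y = K + noise_var*I."""
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))   # -0.5 * log|K_y|
            - 0.5 * n * np.log(2 * np.pi))
```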