2005
DOI: 10.1007/978-3-540-30560-6_4

Analysis of Some Methods for Reduced Rank Gaussian Process Regression

Abstract: While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. Whil…

Cited by 135 publications (187 citation statements). References 6 publications.
Citation types: 0 supporting, 187 mentioning, 0 contrasting.
Citing publications span 2009–2024.

Citation statements (ordered by relevance):
“…Inverting the n × n matrix [K + σ²I] takes time O(n³), and has to be done in every step of the hyper-parameter optimization. Various approximations can be used to reduce this to O(n²); see, e.g., Quinonero-Candela et al (2007). We refer to the process of optimizing hyper-parameters and computing the inverse as fitting the model.…”
Section: General Gaussian Process Regression (mentioning)
confidence: 99%
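
The statement above concerns the O(n³) cost of factorizing the n × n matrix [K + σ²I] at every step of hyper-parameter optimization. Below is a minimal NumPy sketch of exact GP regression showing where that cost arises; the squared-exponential kernel and the hyper-parameter names (lengthscale, signal_var, noise_var) are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel (illustrative choice of covariance); inputs are (n, d) arrays."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_fit_predict(X, y, X_star, noise_var=0.1):
    """Exact GP regression: the Cholesky factorization of [K + sigma^2 I]
    is the O(n^3) step that dominates model fitting."""
    n = X.shape[0]
    K = se_kernel(X, X)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))     # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # (K + sigma^2 I)^{-1} y
    K_star = se_kernel(X_star, X)
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    var = se_kernel(X_star, X_star) - v.T @ v
    return mean, np.diag(var)
```

The Cholesky factorization is the cubic step; the triangular solves that follow are only quadratic, which is why the approximations mentioned in the statement target the factorization itself.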
“…In this section, we propose an approach that is based on a similar kind of idea as the subset of regressors method (see e.g. Poggio and Girosi 1990; Smola and Schölkopf 2000; Rifkin et al 2003; Quiñonero-Candela et al 2007) for the standard regularized least-squares regression. More detailed considerations and experimental results of this approach for RankRLS are presented in Tsivtsivadze et al (2008).…”
Section: Sparse Approximation (mentioning)
confidence: 99%
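
The subset of regressors method referenced above restricts the representation to m basis points, so the dominant computation becomes an m × m solve at roughly O(nm²) cost. Here is a hedged sketch of its predictive mean, reusing the illustrative squared-exponential kernel; taking the first m training points as the basis set is purely for demonstration.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Same illustrative squared-exponential kernel as in the sketch above."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def sor_predict(X, y, X_star, m=100, noise_var=0.1):
    """Subset of regressors predictive mean: an m x m system replaces the
    n x n one, reducing the cost from O(n^3) to about O(n m^2)."""
    Xm = X[:m]                              # illustrative basis set
    Kmn = se_kernel(Xm, X)                  # m x n cross-covariances
    Kmm = se_kernel(Xm, Xm)                 # m x m basis covariances
    A = Kmn @ Kmn.T + noise_var * Kmm       # m x m system matrix
    alpha = np.linalg.solve(A, Kmn @ y)
    return se_kernel(X_star, Xm) @ alpha
```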
“…RLS-based learning algorithms can also be extended for large-scale learning using the subset of regressors method (see e.g. Quiñonero-Candela et al 2007; Tsivtsivadze et al 2008). Further advantages include the possibility to learn several functions in parallel as considered by Rifkin and Klautau (2004).…”
Section: Introduction (mentioning)
confidence: 99%
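
The statement above also points to learning several functions in parallel. In a subset-of-regressors RLS sketch this amounts to solving the shared m × m system against a matrix of targets, one column per output, so a single factorization serves all outputs; the kernel, the regularization parameter lam, and the choice of basis points are illustrative assumptions rather than details of the cited methods.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0):
    """Illustrative squared-exponential kernel."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sparse_rls_fit(X, Y, m=100, lam=1.0):
    """Subset-of-regressors RLS with a target matrix Y of shape (n, k):
    one m x m solve yields coefficients for all k output functions at once."""
    Xm = X[:m]                              # illustrative basis points
    Kmn = se_kernel(Xm, X)                  # m x n
    Kmm = se_kernel(Xm, Xm)                 # m x m
    A = Kmn @ Kmn.T + lam * Kmm
    coef = np.linalg.solve(A, Kmn @ Y)      # (m, k) coefficients, shared solve
    return Xm, coef

def sparse_rls_predict(X_star, Xm, coef):
    """Predictions for all k outputs at once, shape (n_star, k)."""
    return se_kernel(X_star, Xm) @ coef
```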
“…The main bottleneck is the inversion of the covariance matrix which has computational complexity O(n³), where n is the number of data points used to construct the GP. One popular solution is to use a sparse approximation of the full GP and a number of available methods are compared and analyzed in [28]. This paper picks up on the ideas used in [29] where an assumption is made about the structure of the kernel function instead of the conditional distribution or the likelihood.…”
Section: Sparse Approximation Methods (mentioning)
confidence: 99%
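
The statement above concerns replacing the full covariance matrix with a structured low-rank approximation. As a generic illustration only (not the specific construction of reference [29]), a rank-m Nyström factor combined with the Woodbury identity reduces the solve against [K + σ²I] to an m × m system.

```python
import numpy as np

def se_kernel(X1, X2, lengthscale=1.0, signal_var=1.0):
    """Illustrative squared-exponential kernel."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def nystrom_woodbury_solve(X, y, m=100, noise_var=0.1):
    """Approximate (K + sigma^2 I)^{-1} y with K replaced by the rank-m
    Nystrom factor Knm Kmm^{-1} Knm^T; the Woodbury identity turns the
    n x n inverse into an m x m one, for a cost of about O(n m^2)."""
    Xm = X[:m]                                          # illustrative landmark points
    Knm = se_kernel(X, Xm)                              # n x m
    Kmm = se_kernel(Xm, Xm) + 1e-8 * np.eye(len(Xm))    # jitter for stability
    # (Knm Kmm^{-1} Knm^T + s I)^{-1} y
    #   = (y - Knm (s Kmm + Knm^T Knm)^{-1} Knm^T y) / s
    inner = noise_var * Kmm + Knm.T @ Knm               # m x m
    return (y - Knm @ np.linalg.solve(inner, Knm.T @ y)) / noise_var
```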