2020
DOI: 10.1016/j.automatica.2020.109127
Recursive estimation for sparse Gaussian process regression

Cited by 27 publications (26 citation statements)
References 19 publications
“…As future work, we plan to investigate the possibility of using inducing points, as in sparse GPs (Quiñonero-Candela & Rasmussen, 2005; Snelson & Ghahramani, 2006; Titsias, 2009; Hensman et al., 2013; Hernández-Lobato & Hernández-Lobato, 2016; Bauer et al., 2016; Schuerch et al., 2020), to reduce the computational load of the matrix operations (O(n^3) complexity, with O(n^2) storage demands). We also plan to derive tighter approximations of the marginal likelihood.…”
Section: Discussion
confidence: 99%
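The inducing-point idea in this excerpt can be illustrated numerically. The following subset-of-regressors (SoR) sketch is a minimal example, not the cited papers' exact algorithms; the RBF kernel, lengthscale, noise level, and inducing locations are all assumed for illustration. With m inducing points the dominant cost is O(n m^2) rather than the O(n^3) of a full GP:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_predict(X, y, Z, Xstar, noise=0.1):
    """Subset-of-regressors predictive mean with m inducing points Z.
    Only m x m systems are solved, so the cost is O(n m^2)."""
    Kuf = rbf(Z, X)                              # m x n cross-covariance
    Sigma = rbf(Z, Z) + Kuf @ Kuf.T / noise**2   # m x m system matrix
    mu_u = np.linalg.solve(Sigma, Kuf @ y) / noise**2
    return rbf(Xstar, Z) @ mu_u                  # predictive mean at Xstar

# Toy usage: 200 noisy samples of sin(x), only 10 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 10)[:, None]
mu = sor_predict(X, y, Z, np.array([[0.0], [1.5]]))
```

Despite using only 10 inducing points for 200 observations, the predictive mean tracks sin(x) closely on this toy problem.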
“…In the numerical experiments, we have used a full GP, whose computational load grows cubically with the size of the training set. Sparse GPs can be employed to address this issue [Bauer et al. 2016; Hensman et al. 2013; Hernández-Lobato and Hernández-Lobato 2016; Quiñonero-Candela and Rasmussen 2005; Schuerch et al. 2020; Snelson and Ghahramani 2006; Titsias 2009] when thousands of samples must be drawn.…”
Section: Discussion
confidence: 99%
“…This results in an ever-growing set of retained test points. Alternatively, one could implement the Bayesian filtering techniques explained in [48,49] to combine the previously calculated posterior with a new one based on only one data point and a new set of test points.…”
Section: Discussion
confidence: 99%
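The filtering idea mentioned here, folding one observation at a time into a previously computed posterior, can be sketched as a Kalman-style update over a fixed grid of basis points. This is a rough sketch in that spirit, not the exact method of the cited references [48,49]; the kernel, basis grid, and noise level are assumptions for the example:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

class RecursiveGP:
    """Kalman-style recursive GP posterior over fixed basis points A
    (a sketch of the filtering idea, not the cited algorithm)."""
    def __init__(self, A, noise=0.1):
        self.A = A
        self.noise2 = noise**2
        K = rbf(A, A)
        self.Kinv = np.linalg.inv(K + 1e-6 * np.eye(len(A)))  # jittered
        self.m = np.zeros(len(A))   # posterior mean at basis points
        self.P = K.copy()           # posterior covariance at basis points

    def update(self, x, y):
        """Fold a single observation (x, y) into the current posterior."""
        H = rbf(x[None, :], self.A) @ self.Kinv     # 1 x m projection
        r = y - (H @ self.m)[0]                     # innovation
        s = (H @ self.P @ H.T)[0, 0] + self.noise2  # innovation variance
        G = self.P @ H.T / s                        # Kalman gain, m x 1
        self.m = self.m + G[:, 0] * r
        self.P = self.P - G @ H @ self.P

# Stream 300 noisy observations of sin(x), one data point at a time.
rng = np.random.default_rng(1)
A = np.linspace(-3, 3, 15)[:, None]
gp = RecursiveGP(A, noise=0.1)
for _ in range(300):
    x = rng.uniform(-3, 3, 1)
    gp.update(x, np.sin(x[0]) + 0.1 * rng.standard_normal())
err = float(np.abs(gp.m - np.sin(A[:, 0])).max())
```

Each update costs O(m^2) for m basis points regardless of how many observations have been absorbed, which is the appeal of the recursive formulation over refitting a full GP.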