Consistent online Gaussian process regression without the sample complexity bottleneck
2021
DOI: 10.1007/s11222-021-10051-5

Cited by 11 publications (4 citation statements)
References 35 publications
“…The state of the art in online GP includes other methods, such as those based on local approximations (see e.g. [26,27]) and sparse methodologies that reduce the kernel matrix size online through similarity criteria (prominently, the work in [28]). The first methodology is not sparse and does not apply directly to our applications, since its computational complexity grows indefinitely with time; applying it to our experimental setup therefore requires a pruning procedure to bound that complexity.…”
Section: A. Experimental Setup (mentioning)
Confidence: 99%
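The pruning procedure mentioned in this statement can be illustrated with a minimal sketch of budgeted online GP regression. This is an assumption-laden illustration, not the cited papers' method: the RBF kernel, the `budget` parameter, and the redundancy-based pruning rule (drop the dictionary point most similar to the rest) are all illustrative choices.

```python
# Minimal sketch of online GP regression with a pruned dictionary.
# Assumptions (not from the cited works): unit-variance RBF kernel,
# a fixed `budget`, and a crude similarity-based pruning rule.
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-stacked inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class BudgetedOnlineGP:
    def __init__(self, dim=1, budget=50, noise=1e-2):
        self.budget, self.noise = budget, noise
        self.X = np.empty((0, dim))   # dictionary inputs
        self.y = np.empty(0)          # dictionary targets

    def update(self, x, y):
        """Absorb one sample; prune if the dictionary exceeds the budget."""
        self.X = np.vstack([self.X, x[None, :]])
        self.y = np.append(self.y, y)
        if len(self.y) > self.budget:
            # Illustrative pruning rule: drop the point with the highest
            # total kernel similarity to the rest (the most redundant one).
            K = rbf(self.X, self.X)
            redundancy = K.sum(axis=0) - 1.0  # subtract the unit diagonal
            keep = np.ones(len(self.y), bool)
            keep[np.argmax(redundancy)] = False
            self.X, self.y = self.X[keep], self.y[keep]

    def predict(self, Xs):
        """Standard GP posterior mean/variance on the pruned dictionary."""
        K = rbf(self.X, self.X) + self.noise * np.eye(len(self.y))
        Ks = rbf(Xs, self.X)
        alpha = np.linalg.solve(K, self.y)
        mean = Ks @ alpha
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, var
```

Because the dictionary size is capped, each update costs at most O(budget³) rather than growing without bound in the number of observed samples, which is the complexity issue the quoted passage raises.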
“…The aggregation model, on the other hand, considers the predictions of most of the submodels, smoothing the discontinuous predictions between two data subsets. In conclusion, the local approximation approach allows the model to be accelerated by distributed or parallel operations, and the submodels facilitate the identification of local features such as non-stationarity and heteroskedasticity [20].…”
Section: Local Approximation (mentioning)
Confidence: 99%
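One standard way to realize the aggregation described here is precision-weighted fusion of the submodels' Gaussian predictions, i.e. a product-of-experts combination. The sketch below assumes this particular scheme; the survey cited as [20] covers several aggregation variants, and this is only one of them.

```python
# Minimal sketch of product-of-experts aggregation of local GP submodels:
# each expert returns a Gaussian prediction per test point, and experts
# are fused by precision (inverse-variance) weighting. Illustrative only.
import numpy as np

def poe_aggregate(means, variances):
    """Fuse per-expert Gaussians N(means[k], variances[k]) at each test point.

    means, variances: arrays of shape (n_experts, n_test).
    Returns the fused mean and variance, each of shape (n_test,).
    """
    prec = 1.0 / variances                 # expert precisions
    var = 1.0 / prec.sum(axis=0)           # fused variance
    mean = var * (prec * means).sum(axis=0)
    return mean, var

# Usage: three experts trained on disjoint subsets predict at two inputs.
mu = np.array([[0.9, 1.1], [1.0, 1.3], [1.2, 1.0]])
s2 = np.array([[0.1, 0.5], [0.2, 0.2], [0.4, 0.1]])
print(poe_aggregate(mu, s2))
```

Because low-variance (confident) experts dominate the weighted sum, the fused prediction transitions smoothly as a test point moves between data subsets, which is the smoothing effect the quoted passage describes.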
“…[Algorithm excerpt: while thinning, remove a point x_j from the dictionary W whenever KSD(q_{W\{x_j}})² < α² + ε; output a thinned dictionary W satisfying KSD(q_W)² < α² + ε.] Representing a nonparametric posterior using only the most representative samples has been shown to exhibit theoretical and numerical advantages in probability density estimation (Campbell and Broderick 2018, 2019), Gaussian processes (Koppel, Pradhan, and Rajawat 2021), and Monte Carlo methods (Elvira, Míguez, and Djurić 2016). Here we introduce it for the first time in model-based RL, which allows us to control the growth of the posterior complexity and, in turn, obtain computationally efficient updates.…”
Section: Posterior Coreset Construction via KSD (mentioning)
Confidence: 99%
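The greedy thinning loop in the quoted algorithm excerpt can be sketched as follows. The function `ksd_squared` is a hypothetical placeholder for a kernel Stein discrepancy estimator (not a real library call), and the acceptance rule mirrors the excerpt: a point is dropped only if the squared KSD of the reduced dictionary stays under the budget α² + ε.

```python
# Minimal sketch of KSD-based dictionary thinning, following the quoted
# algorithm. `ksd_squared` is a hypothetical callable that estimates the
# squared kernel Stein discrepancy of the posterior built on a dictionary.
def thin_dictionary(W, ksd_squared, alpha, eps):
    """Greedily remove points while the squared KSD stays under alpha**2 + eps."""
    changed = True
    while changed:
        changed = False
        for j in range(len(W)):
            candidate = W[:j] + W[j + 1:]        # W \ {x_j}
            if ksd_squared(candidate) < alpha**2 + eps:
                W = candidate                    # removal keeps KSD under budget
                changed = True
                break                            # restart the scan on smaller W
    # If the input dictionary met the bound, the output still does, since
    # every accepted removal is checked against alpha**2 + eps.
    return W
```

Each accepted removal shrinks the dictionary by one point, so the loop terminates after at most |W| passes, and the retained points form the compressed posterior representation ("coreset") whose size the quoted passage seeks to control.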