2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2019.8918874

Hilbert-Space Reduced-Rank Methods For Deep Gaussian Processes

Cited by 8 publications (14 citation statements) | References 20 publications
“…Recently, several hierarchical models, which promote more versatile behaviours, have been developed. These include, for example, deep Gaussian processes (Dunlop et al., 2018; Emzir et al., 2019), level-set methods (Dunlop et al., 2017), mixtures of compound Poisson processes and Gaussians (Hosseini, 2017), and stacked Matérn fields via stochastic partial differential equations (Roininen et al., 2019). The problem with hierarchical priors is that in the posteriors the parameters and hyperparameters may become strongly coupled, which means that vanilla MCMC methods become problematic and, for example, re-parameterisations are needed for sampling the posterior efficiently (Chada et al., 2019; Monterrubio-Gómez et al., 2019).…”
Section: Literature Review (mentioning)
confidence: 99%
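
The re-parameterisation remark in this excerpt can be made concrete with a minimal Python sketch of the whitened (non-centred) parameterisation of a single GP layer. This is a generic illustration, not the specific schemes of Chada et al. (2019) or Monterrubio-Gómez et al. (2019); the squared-exponential kernel, jitter level, and function names are assumptions made only for the example.

# Hypothetical sketch: whitened / non-centred re-parameterisation of one GP layer.
# Generic illustration only (assumed kernel, jitter, and names), not the schemes
# of Chada et al. (2019) or Monterrubio-Gomez et al. (2019).
import numpy as np

def sq_exp_kernel(x, ell, sigma=1.0):
    # Squared-exponential covariance matrix on 1-D inputs x.
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell)**2)

def centred_draw(x, ell, rng):
    # Centred parameterisation: sample the field f ~ N(0, C(ell)) directly,
    # so f and ell are strongly coupled in the posterior.
    C = sq_exp_kernel(x, ell) + 1e-6 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), C)

def non_centred_draw(x, ell, w):
    # Non-centred parameterisation: keep white noise w ~ N(0, I) as the sampled
    # variable and map it through the Cholesky factor of C(ell); moving ell does
    # not invalidate w, which eases MCMC over the hyperparameter.
    C = sq_exp_kernel(x, ell) + 1e-6 * np.eye(len(x))
    L = np.linalg.cholesky(C)
    return L @ w

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
w = rng.standard_normal(50)            # whitened variable, reused across ell moves
f = non_centred_draw(x, ell=0.2, w=w)  # implied field for the current ell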
“…The idea of putting GP priors on the hyperparameters of a GP (Heinonen et al. 2016; Roininen et al. 2019; Salimbeni and Deisenroth 2017b) can be continued hierarchically, which leads to one type of deep Gaussian process (DGP) construction (Dunlop et al. 2018; Emzir et al. 2019, 2020). Namely, the GP is conditioned on another GP, which again depends on another GP, and so forth.…”
Section: Introduction (mentioning)
confidence: 99%
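
The layered construction described in this excerpt can be sketched directly: an inner GP sample is used as a log length-scale field for the outer GP. The snippet below is a minimal illustration of that idea using a Paciorek–Schervish-style non-stationary squared-exponential kernel; the specific kernel, grid, and constants are assumptions for illustration and not the reduced-rank method of the cited paper.

# Hypothetical two-layer DGP sketch: an inner GP defines a log length-scale field
# that drives a non-stationary (Paciorek-Schervish type) outer kernel. Assumed
# kernels and constants for illustration only.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)

def stationary_se(x, ell, sigma=1.0):
    # Stationary squared-exponential kernel for the inner layer.
    d = x[:, None] - x[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell)**2)

# Inner layer: GP sample u(x), interpreted as a log length-scale field.
C_u = stationary_se(x, ell=0.3) + 1e-6 * np.eye(len(x))
u = rng.multivariate_normal(np.zeros(len(x)), C_u)
ell = np.exp(u - 1.0)                         # pointwise length-scales ell(x) > 0

# Outer layer: non-stationary squared-exponential kernel driven by ell(x).
li, lj = ell[:, None], ell[None, :]
d = x[:, None] - x[None, :]
C_f = np.sqrt(2.0 * li * lj / (li**2 + lj**2)) * np.exp(-d**2 / (li**2 + lj**2))
f = rng.multivariate_normal(np.zeros(len(x)), C_f + 1e-6 * np.eye(len(x)))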
“…In this work, we consider the challenge of accurate and fast computations in the presence of Gaussian processes with parameterized covariance kernels. In view of the literature on deep [7,9,12,11] and shallow [15] Gaussian processes, we refer to parameterized covariance kernels as shallow covariance kernels. Moreover, we denote the Gaussian process as outer layer and the distribution of the hyper-parameter as inner layer.…”
(mentioning)
confidence: 99%
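
A minimal sketch of the shallow construction described here, assuming a log-normal inner layer over a single length-scale hyper-parameter and a squared-exponential outer-layer kernel (both assumptions for illustration, not the kernels used in the cited work):

# Hypothetical sketch of the "shallow" construction: an inner layer over a kernel
# hyper-parameter theta and an outer GP layer N(0, C(theta)). The log-normal
# prior and squared-exponential kernel are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 80)

def C(theta):
    # Parameterised (shallow) covariance kernel C(theta) evaluated on the grid x.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / theta)**2) + 1e-6 * np.eye(len(x))

theta = rng.lognormal(mean=np.log(0.2), sigma=0.5)        # inner layer: theta ~ prior
u = rng.multivariate_normal(np.zeros(len(x)), C(theta))   # outer layer: u | theta ~ N(0, C(theta))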
“…For a given parameter θ ∈ Θ, we compute the Cholesky factor L ∈ ℝ^{k×k} of C(θ)(I, I). For a sample ξ ∼ N(0, Id_k), a draw from M(·|θ) is obtained via (12). We now discuss the computational cost of drawing such a sample. First, we need to construct the matrices C(θ)(:, I) and C(θ)(I, I), which comes at a cost of O(nks) if the linear expansion of C has s terms.…”
(mentioning)
confidence: 99%
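
The cost pattern described in this excerpt (an O(nks) assembly of C(θ)(:, I) and C(θ)(I, I) followed by an O(k³) Cholesky factorisation of the k×k block) can be sketched as follows. Equation (12) is not reproduced in the excerpt, so the draw below uses a standard Nyström-type construction, C(θ)(:, I) L⁻ᵀξ with L Lᵀ = C(θ)(I, I), purely as an assumed stand-in; the kernel, index set, and sizes are likewise illustrative.

# Hypothetical sketch of the cost pattern above. Equation (12) is not reproduced
# in the excerpt, so the draw uses an assumed Nystrom-type stand-in:
# with L L^T = C(theta)(I, I) and xi ~ N(0, Id_k), the vector
# C(theta)(:, I) L^{-T} xi has covariance C(:, I) C(I, I)^{-1} C(I, :).
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(3)
n, k = 2000, 50
x = np.linspace(0.0, 1.0, n)
I = np.linspace(0, n - 1, k).astype(int)       # index set of k anchor points

def kernel(a, b, ell=0.1):
    # Assumed squared-exponential kernel for the illustration.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell)**2)

C_nI = kernel(x, x[I])                          # C(theta)(:, I): n*k entries
C_II = kernel(x[I], x[I]) + 1e-6 * np.eye(k)    # C(theta)(I, I): k*k block

L = cholesky(C_II, lower=True)                  # O(k^3) Cholesky factor
xi = rng.standard_normal(k)                     # xi ~ N(0, Id_k)
sample = C_nI @ solve_triangular(L, xi, lower=True, trans='T')   # O(n k) per draw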