2020
DOI: 10.48550/arxiv.2011.07255
Preprint

Factorized Gaussian Process Variational Autoencoders

Abstract: Variational autoencoders often assume isotropic Gaussian priors and mean-field posteriors, hence do not exploit structure in scenarios where we may expect similarity or consistency across latent variables. Gaussian process variational autoencoders alleviate this problem through the use of a latent Gaussian process, but lead to a cubic inference time complexity. We propose a more scalable extension of these models by leveraging the independence of the auxiliary features, which is present in many datasets. Our m…
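For intuition about the scalability claim in the abstract, here is a minimal sketch of one common way a factorized kernel cuts the cubic cost: if the GP kernel over the joint auxiliary features decomposes as a Kronecker product K = K_t ⊗ K_s over independent feature groups, eigendecomposing the small factors replaces the O((n_t·n_s)³) factorization of the full kernel with O(n_t³ + n_s³). This is an illustrative sketch, not the paper's implementation; all variable names and sizes below are assumptions.

```python
import numpy as np

def rbf(x, lengthscale=1.0):
    # Squared-exponential kernel matrix for 1-D inputs.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)          # hypothetical first auxiliary feature (e.g. time)
s = np.linspace(0.0, 1.0, 40)          # hypothetical second auxiliary feature
K_t = rbf(t) + 1e-6 * np.eye(len(t))   # jitter for numerical stability
K_s = rbf(s) + 1e-6 * np.eye(len(s))

# Eigendecompose the small factors: O(n_t^3 + n_s^3) instead of
# O((n_t * n_s)^3) for the full 2000 x 2000 kernel K = K_t kron K_s.
w_t, U_t = np.linalg.eigh(K_t)
w_s, U_s = np.linalg.eigh(K_s)

# Draw a sample from N(0, K_t kron K_s) without ever forming K:
# the eigenvalues of a Kronecker product are the outer product of the
# factor eigenvalues, and its eigenvectors are U_t kron U_s.
Z = rng.standard_normal((len(t), len(s)))
sample = U_t @ (np.sqrt(np.outer(w_t, w_s)) * Z) @ U_s.T
print(sample.shape)  # (50, 40): one latent value per (t, s) pair
```

The same factor-wise manipulation applies to linear solves and log-determinants, which is what makes GP inference with structured kernels of this kind tractable at scale.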

Cited by 2 publications (2 citation statements, published 2021–2022); references 5 publications.

“…However, this operation can be made more scalable, either through the use of inducing point methods (Ashman et al., 2020; Jazbec et al., 2021) (cf. Section 2.2) or through factorised kernels (Jazbec et al., 2020). Moreover, depending on the prior knowledge of the generative process, these models can also be extended to use additive GP priors (Ramchandran et al., 2020) or tensor-valued ones (Campbell & Liò, 2020).…”
Section: Distributional Variational Autoencoder Priors
Citation type: mentioning; confidence: 99%
“…(6)). However, this operation can be made more scalable, either through the use of inducing point methods [116, 117] (cf. Section 2.2) or through factorized kernels [118]. Moreover, depending on the prior knowledge of the generative process, these models can also be extended to use additive GP priors [119] or tensor-valued ones [120].…”
Section: Distributional VAE Priors
Citation type: mentioning; confidence: 99%
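The inducing-point route mentioned in both citation statements can be sketched just as briefly. Again this is an illustrative sketch under assumed names and sizes, not the cited papers' code: a Nyström-style low-rank approximation built from M ≪ N inducing locations replaces the O(N³) kernel factorization with an O(NM²) one.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential cross-kernel between two 1-D input sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 2000))   # N = 2000 observed inputs (hypothetical)
z = np.linspace(0.0, 10.0, 30)              # M = 30 inducing locations (hypothetical)

K_nm = rbf(x, z)                            # (N, M) cross-kernel
K_mm = rbf(z, z) + 1e-6 * np.eye(len(z))    # (M, M) inducing kernel, with jitter

# Nystrom approximation: K ~ K_nm K_mm^{-1} K_nm^T.
# Cholesky of the small M x M block yields an (N, M) factor Phi with
# Phi @ Phi.T ~ K, so downstream algebra costs O(N M^2) rather than O(N^3).
L = np.linalg.cholesky(K_mm)
Phi = np.linalg.solve(L, K_nm.T).T          # Phi = K_nm L^{-T}, shape (N, M)
print(Phi.shape)
```

Approximate posteriors and log-marginal terms can then be computed from the thin factor Phi, which is the mechanism inducing-point GP-VAE variants exploit for scalability.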