2020
DOI: 10.48550/arxiv.2008.12595
Preprint

Dynamical Variational Autoencoders: A Comprehensive Review

Laurent Girin,
Simon Leglaive,
Xiaoyu Bie
et al.

Abstract: The Variational Autoencoder (VAE) is a powerful deep generative model that is now extensively used to represent high-dimensional complex data via a low-dimensional latent space learned in an unsupervised manner. In the original VAE model, input data vectors are processed independently. In recent years, a series of papers have presented different extensions of the VAE to sequential data, which not only model the latent space but also model the temporal dependencies within a sequence of data vectors a…
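The abstract's core machinery (a Gaussian latent space trained via the reparameterization trick, regularized by a KL term toward a standard normal prior) can be sketched in a few lines of NumPy. This is an illustrative fragment, not code from the reviewed paper; the function names are our own.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so gradients can flow through the sampling step during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), the regularizer
    # in the VAE evidence lower bound (ELBO).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))        # encoder means for a batch of 4, latent dim 2
log_var = np.zeros((4, 2))   # log-variances; zeros give unit variance
z = reparameterize(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)
# With mu = 0 and sigma = 1 the KL term is exactly zero.
```

In a full VAE, `mu` and `log_var` would be produced by an encoder network and `z` fed to a decoder; the DVAEs surveyed in this review additionally condition these quantities on past latent states and observations.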


Cited by 34 publications (72 citation statements)
References 58 publications
“…The SRNN model is chosen to represent the "classical" dynamic VAE model: it utilizes RNNs as encoder and decoder and models the internal dynamics of the inferred latent sequence with an explicit transition model. In the comprehensive comparison between DVAE models provided by Girin et al. [18] it emerges as the most performative model, leading us to select it as the representative for this class of generative models.…”
Section: Baseline Models (mentioning)
Confidence: 99%
“…The choices of hyperparameters for each model can be found in Table B.5. For the dynamical VAE baselines (SRNN and KVAE), we largely used the implementations provided by Girin et al. [18], with some slight changes and extensions. The implementation of our model, as well as the experimental pipeline necessary to reproduce our results, can be found at https://github.com/simonbing/HealthGen.…”
Section: B. Implementation and Training Details (mentioning)
Confidence: 99%
“…While these approaches successfully taught students how generative models work through creative applications, they focused on GANs, which are not the only generative AI model. In this work, we focus on VAEs (Kingma and Welling 2013), another powerful generative model extensively used to represent high-dimensional data via a low-dimensional latent space (Girin et al. 2020). Inspired by previous work, students' learning is guided through playing a simulated "Shadow Matching Game" that teaches students the constituents of a VAE, and an exploration with tools that use VAEs to create media.…”
Section: Creative AI Education (mentioning)
Confidence: 99%
“…While RNNs form deterministic models, Dynamic Bayesian Networks [20,21], accounting for temporal dependencies (such as Hidden Markov Models [22]), form probabilistic approaches for learning the structure of generative models and enjoy widespread adoption for sequential data. Very recently, Dynamical Variational Autoencoders [23] emerged as a sequential version of variational autoencoders (VAE [24]), or as a variational version of Dynamical Bayesian Networks, and have been applied to learning the latent space representation for high-dimensional sequential data in an unsupervised manner. These models typically parameterize the involved distributions by means of deep neural networks, which allow for learning high-dimensional and highly multi-modal distributions.…”
Section: Introduction (mentioning)
Confidence: 99%
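The "explicit transition model" that the citing papers describe can be illustrated by ancestral sampling from a first-order latent dynamics model. The sketch below uses a fixed linear-Gaussian transition purely for illustration; a DVAE would parameterize the transition with a neural network, and all names here are our own.

```python
import numpy as np

def sample_latent_sequence(T, dim, rng):
    # Ancestral sampling from z_t = A z_{t-1} + w_t, with w_t ~ N(0, Q).
    # This is the kind of temporal prior a DVAE places on its latent
    # sequence; here A is a fixed contraction and Q is isotropic.
    A = 0.9 * np.eye(dim)
    z = np.zeros((T, dim))
    z[0] = rng.standard_normal(dim)          # z_1 ~ N(0, I)
    for t in range(1, T):
        z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(dim)
    return z

z = sample_latent_sequence(50, 2, np.random.default_rng(1))
```

A decoder network would then map each `z[t]` to the parameters of the observation distribution at time t, which is what distinguishes these models from a per-frame VAE applied independently to each vector.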