2019 International Conference of Artificial Intelligence and Information Technology (ICAIIT)
DOI: 10.1109/icaiit.2019.8834544

A Comparative Study on Variational Autoencoders and Generative Adversarial Networks

Cited by 17 publications (8 citation statements) · References 12 publications
“…To model the cognitive processes required for the reconstruction and cross-modal generation tasks, we employed a mixture-of-experts multimodal variational autoencoder (MMVAE; Shi et al., 2019). The MMVAE is a generative model for multimodal learning that achieves high generation quality.…”
Section: Methods
Mentioning confidence: 99%
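As a minimal sketch of the mixture-of-experts idea this quote refers to (a PyTorch illustration; the names encoders and inputs, the Gaussian parameterization, and the multi-modality setup are assumptions, not details from the citing paper): because the MoE posterior weights each modality's expert equally, sampling it reduces to picking one modality's encoder uniformly at random and sampling from its Gaussian.

import torch

def moe_sample(encoders, inputs):
    # MoE posterior q(z | x_1..x_M) = (1/M) * sum_m q_m(z | x_m):
    # sampling the mixture = pick one expert uniformly, then sample it.
    m = torch.randint(len(encoders), (1,)).item()
    mu, logvar = encoders[m](inputs[m])   # assumed Gaussian expert q_m
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)  # reparameterized sample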
“…Both models learned the latent representations of , but in different ways. For training, MMVAE maximizes the following objective function (Shi et al., 2019):…”
Section: Methods
Mentioning confidence: 99%
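The objective itself is truncated in the excerpt above. As a hedged reconstruction from Shi et al. (2019), with M modalities, K importance samples, unimodal encoders q_{\phi_m}, and generative model p_\Theta (symbols follow that paper, not the excerpt), the MMVAE maximizes a stratified IWAE-style bound under a mixture-of-experts posterior:

\[
q_{\Phi}(z \mid x_{1:M}) = \frac{1}{M} \sum_{m=1}^{M} q_{\phi_m}(z \mid x_m)
\]
\[
\mathcal{L}^{K}(x_{1:M}) = \frac{1}{M} \sum_{m=1}^{M}
\mathbb{E}_{z_m^{1:K} \sim q_{\phi_m}(z \mid x_m)}
\left[ \log \frac{1}{K} \sum_{k=1}^{K}
\frac{p_{\Theta}\left(z_m^{k}, x_{1:M}\right)}{q_{\Phi}\left(z_m^{k} \mid x_{1:M}\right)} \right]
\]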
“…Instead of learning only a compressed representation of the image, a VAE learns the distribution of the data, and by sampling from that distribution we can decode and produce new data. VAEs (variational autoencoders) have also been highly successful, frequently producing images that closely resemble their original dataset [9].…”
Section: Autoencoder-Based Image Synthesis
Mentioning confidence: 99%
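A minimal sketch of that generation step, assuming a trained PyTorch decoder (the names decoder and latent_dim are illustrative, not from the cited work): new data is produced by sampling the latent prior N(0, I) and decoding.

import torch

def generate(decoder, n_samples=16, latent_dim=20):
    z = torch.randn(n_samples, latent_dim)  # z ~ N(0, I), the VAE prior
    with torch.no_grad():
        return decoder(z)                   # decode latents into new samples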
“…A variational autoencoder (VAE) is an architecture made up of an encoder and a decoder that is trained to minimize the reconstruction error between the encoded-decoded data and the original data [9]. Instead of encoding an input as a single point, a VAE encodes it as a distribution over the latent space, which introduces a regularization of the latent space.…”
Section: VAE
Mentioning confidence: 99%
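A minimal PyTorch sketch of that training objective (function and variable names are illustrative assumptions, and the Bernoulli reconstruction term is one common choice): the loss combines the reconstruction error with a KL term that pulls each encoded Gaussian toward the standard normal prior, which is the latent-space regularization described above; the reparameterization trick makes sampling from the encoded distribution differentiable.

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Encode the input as a distribution N(mu, sigma^2), then sample it
    # differentiably via z = mu + sigma * eps, eps ~ N(0, I).
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term: error between the encoded-decoded data and the input.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL regularizer: closed-form KL(N(mu, sigma^2) || N(0, I)).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl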