2023
DOI: 10.1016/j.patcog.2023.109530
Cloud-VAE: Variational autoencoder with concepts embedded

Cited by 11 publications (3 citation statements) | References 13 publications
“…The variational autoencoder is a generative network structure based on variational Bayes inference. In contrast to the traditional autoencoder [6], which describes the latent space numerically, it describes observations of the latent space probabilistically: it parameterizes the distribution by the mean and variance of the latent variables, learns these parameters by maximizing the likelihood, and generates reconstructed data by decoding samples from that distribution [7]. It has shown strong results in data generation.…”
Section: The Binary Variational Autoencoder
confidence: 99%
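The mechanism described in the excerpt above can be made concrete with a small sketch. This is a minimal numpy illustration (not the cited paper's implementation): a toy linear encoder produces the mean and log-variance of the approximate posterior, a sample is drawn via the reparameterization trick, and the closed-form KL term of the likelihood objective is computed. The weight matrices here are random placeholders standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    """Toy linear encoder: maps each input to the mean and log-variance
    that parameterize the diagonal-Gaussian posterior q(z|x)."""
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sampling step differentiable
    with respect to mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior;
    this is the regularization term of the variational objective."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

x = rng.standard_normal((4, 8))           # 4 samples, 8 features
w_mu = rng.standard_normal((8, 2)) * 0.1  # latent dimension 2
w_logvar = rng.standard_normal((8, 2)) * 0.1

mu, logvar = encode(x, w_mu, w_logvar)
z = reparameterize(mu, logvar)            # one latent sample per input
kl = kl_to_standard_normal(mu, logvar)    # non-negative for every input
```

In a trained VAE the decoder would map `z` back to data space, and the reconstruction likelihood plus this KL term form the evidence lower bound that is maximized.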
“…In this part, we evaluate the FID of different GAN algorithms to demonstrate the effectiveness of the proposed RCFCA-GAN algorithm. We train ACGAN [37], ACGAN with the WGAN-GP loss function [30] (which we call "WACGAN-GP"), WACGAN-GP using the single top-k method (which we call "Single top-k"), and the proposed algorithm implementing only Stage 1 (which we call "RCFCA-GAN (Stage 1)"), respectively. We implement only Stage 1 in this experiment because FID evaluates the generation results of the GAN model.…”
Section: FID of Generated Images
confidence: 99%
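For context on the metric used in the excerpt above: FID is the Fréchet distance between two Gaussians fitted to feature embeddings of real and generated images. The sketch below is a simplified numpy-only version that assumes diagonal covariances (the standard FID uses full covariances and a matrix square root of their product, typically via `scipy.linalg.sqrtm`); it is an illustration of the formula, not the evaluation code of the cited work.

```python
import numpy as np

def fid_diagonal(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets,
    under a diagonal-covariance assumption:
    ||mu_a - mu_b||^2 + sum_i (sqrt(var_a_i) - sqrt(var_b_i))^2."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    var_a, var_b = feats_a.var(axis=0), feats_b.var(axis=0)
    mean_term = np.sum((mu_a - mu_b) ** 2)
    cov_term = np.sum((np.sqrt(var_a) - np.sqrt(var_b)) ** 2)
    return mean_term + cov_term

rng = np.random.default_rng(1)
real = rng.standard_normal((500, 16))              # stand-in for real features
close = rng.standard_normal((500, 16)) * 1.05      # similar distribution
far = rng.standard_normal((500, 16)) * 3.0 + 2.0   # shifted and scaled

score_close = fid_diagonal(real, close)  # small: distributions nearly match
score_far = fid_diagonal(real, far)      # large: distributions differ
```

Lower FID means the generated feature distribution is closer to the real one, which is why the excerpt compares only generation-focused variants with it.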
“…An AE encodes the data into a low-dimensional representation with an encoder and decodes that representation back to the original data. However, because the AE uses a small latent dimension, it cannot fit complex distributions, and some of its generation results are unsatisfactory [30].…”
Section: Introduction
confidence: 99%
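The encode/decode structure described in the excerpt above can be illustrated with the simplest possible case: a linear autoencoder, whose optimum coincides with the PCA subspace, so it can be fitted directly with an SVD rather than gradient descent. This is a pedagogical sketch under that linearity assumption, not code from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data that lies exactly in a 2-D subspace of R^8, so a 2-D
# bottleneck can reconstruct it perfectly.
basis = rng.standard_normal((2, 8))
x = rng.standard_normal((200, 2)) @ basis

# Linear autoencoder with a 2-D bottleneck: the optimal encoder/decoder
# are given by the top singular vectors of the centered data.
mean = x.mean(axis=0)
_, _, vt = np.linalg.svd(x - mean, full_matrices=False)
encoder = vt[:2].T   # projects 8-D input to a 2-D latent code
decoder = vt[:2]     # maps the 2-D code back to 8-D

z = (x - mean) @ encoder           # low-dimensional representation
x_hat = z @ decoder + mean         # reconstruction from the code
err = np.mean((x - x_hat) ** 2)    # near zero on this rank-2 data
```

On real data that does not lie in a low-dimensional linear subspace, the same small bottleneck loses information, which is exactly the limitation the excerpt raises as motivation for richer generative models.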