2020
DOI: 10.3390/make2030020

Semi-Supervised Adversarial Variational Autoencoder

Abstract: We present a method to improve the reconstruction and generation performance of a variational autoencoder (VAE) by injecting adversarial learning. Instead of comparing the reconstructed data with the original data to calculate the reconstruction loss, we use a consistency principle for deep features. The main contributions are threefold. Firstly, our approach combines the two models, i.e., the GAN and the VAE, and thus improves the generation and reconstruction performance of the VAE. Secondly, the VAE trainin…
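As a rough illustration of the feature-consistency idea described in the abstract, the sketch below computes the reconstruction loss between intermediate discriminator features of the original and reconstructed inputs rather than between raw pixels. This is a minimal sketch, not the authors' implementation; the discriminator interface (a features() method returning a list of feature maps) is an assumption.

import torch.nn.functional as F

def feature_consistency_loss(discriminator, x, x_recon):
    # Hypothetical interface: discriminator.features(x) returns the
    # intermediate activations (a list of feature maps) for input x.
    feats_real = discriminator.features(x)
    feats_fake = discriminator.features(x_recon)
    # Accumulate the MSE between matching feature maps; real features are
    # detached so only the generator/decoder path receives gradients.
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        loss = loss + F.mse_loss(ff, fr.detach())
    return loss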

Cited by 17 publications (4 citation statements)
References 49 publications
“…As mentioned in Section 2, utilizing CVAE as a solitary classification method is fruitless because the autoencoder is only tasked with regenerating its input [18]. In order to implement discriminative modeling with the help of the deep generative model CVAE, the authors of this article have used a semi-supervised learning approach based on the study done in [19].…”
Section: Convolutional Variational Autoencoders as a Classifier
confidence: 99%
“…The authors have investigated the effects of α and β on the testing performance of the network. According to [19], the best result has been obtained with the combination α = 1 and β = 0.1. The authors have tested this combination against the identical combination (i.e.…”
confidence: 99%
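For context, the α/β trade-off mentioned in the quote above is simply a weighted combination of loss terms. A generic sketch under assumed names (recon_loss and adv_loss are placeholders; only the values α = 1 and β = 0.1 come from the quoted study):

def total_loss(recon_loss, adv_loss, alpha=1.0, beta=0.1):
    # alpha weights the reconstruction term, beta the adversarial term;
    # alpha=1.0, beta=0.1 is the best-performing combination reported in [19].
    return alpha * recon_loss + beta * adv_loss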
“…VAE differs from AE in that it provides a statistical way to describe the samples of the dataset in latent space. Therefore, in VAE, the encoder outputs a probability distribution at the bottleneck layer instead of a single output value [57].…”
Section: Image Synthesis
confidence: 99%
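The point made in the quote above, that the encoder emits a distribution rather than a single value, can be made concrete with the standard VAE reparameterization trick. A generic sketch, not tied to the cited work:

import torch

def reparameterize(mu, logvar):
    # The encoder outputs the parameters (mu, logvar) of a Gaussian over the
    # latent code instead of one deterministic value; a latent sample is then
    # drawn via the reparameterization trick so gradients can flow through.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std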
“…In the recent past, inspired by the success of deep long short-term memory (LSTM) models, some approaches similar to word2vec (Mikolov et al., 2013) have been proposed to successfully learn latent space encodings directly from variable-length sequences (Ding et al., 2019). The direct sequence-to-latent-space encoding method produces good generalization models (Zemouri, 2020); however, such methods usually rely on the availability of a large training dataset. Furthermore, the direct extraction of latent space features from a limited number of sequences, such as bioluminescence (Zhang et al., 2021), antioxidant (Olsen et al., 2020), ECM (Kabir et al., 2018), antifreeze proteins (AFPs) (Kandaswamy et al., 2011), or other classes of proteins, is a challenging problem.…”
Section: Introduction
confidence: 99%