2020
DOI: 10.48550/arxiv.2004.05472
Preprint

Autoencoding Generative Adversarial Networks

Cited by 3 publications (8 citation statements)
References 0 publications
“…For the case where the two databases have an equal number of columns and an onto mapping, where the smaller column set is always a subset of the larger column set, we demonstrate the performance of the proposed algorithm on samples from the following two data-generating distributions: Gaussians with mean vectors randomly sampled from [10, 20] separately. The true factor dimension is taken to be k = 10.…”
Section: Discussion (mentioning)
Confidence: 99%
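
The quoted setup is truncated, but it suggests Gaussian data generated from a linear factor model with means drawn from [10, 20] and true factor dimension k = 10. A loose sketch of that setup follows; the observed dimension d, sample count n, noise scale, and the factor-model form are assumptions for illustration, not the citing paper's code.

```python
# Hypothetical reconstruction of the quoted data-generating setup.
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 10, 50, 1000                 # true factor dim k = 10; d and n assumed

mu = rng.uniform(10, 20, size=d)       # mean vector sampled from [10, 20]
W = rng.normal(size=(d, k))            # loading matrix mapping k factors to d features
Z = rng.normal(size=(n, k))            # latent factors
X = Z @ W.T + mu + rng.normal(scale=0.1, size=(n, d))  # observed Gaussian samples
```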
“…The Adversarial Generator-Encoder Networks (AGE) [33] model uses the divergence of two induced latent distributions as a kind of premade discriminator. Expanding this approach with multiple discriminators and generators/encoders, as done in [20], can stabilize the mapping of data points to a latent space and back, perhaps reducing the mode collapse and training difficulty of many GANs. Even though the above methods have the potential to generate augmenting data, they do not directly address all the use cases of a schema-matched dataset and do not exploit the relationships between features across databases.…”
Section: Related Work (mentioning)
Confidence: 99%
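
To make the "premade discriminator" idea concrete, here is a minimal PyTorch sketch in the spirit of AGE: a closed-form KL divergence between the batch's empirical latent moments and a standard-normal prior plays the role the discriminator would otherwise play. The kl_to_prior helper and the toy encoder/generator sizes are illustrative assumptions, not the AGE authors' code.

```python
import torch
import torch.nn as nn

def kl_to_prior(z: torch.Tensor) -> torch.Tensor:
    """KL(N(mu, sigma^2) || N(0, 1)), summed over dimensions, using the
    batch's empirical moments. This divergence itself scores how well
    a set of codes matches the prior, with no learned discriminator."""
    mu = z.mean(dim=0)
    var = z.var(dim=0, unbiased=False) + 1e-8
    return 0.5 * (var + mu**2 - 1.0 - torch.log(var)).sum()

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
generator = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

x_real = torch.rand(64, 784)           # stand-in for a data batch
z_prior = torch.randn(64, 32)          # samples from the latent prior

z_real = encoder(x_real)               # latent distribution induced by real data
z_fake = encoder(generator(z_prior))   # latent distribution induced by generated data

# Encoder pulls real-data latents toward the prior and pushes generated
# latents away; the generator (updated in its own step) would instead
# minimize kl_to_prior(z_fake).
encoder_loss = kl_to_prior(z_real) - kl_to_prior(z_fake)
```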
“…This loss function for reconstruction is usually a function that computes the pixel difference between the input x and the output x̃. After training the autoencoder, it can be used for other purposes such as feature extraction [22]. Generative adversarial networks (GANs) are a class of generative, unsupervised neural networks that can approximate the distribution of a given dataset in order to generate data.…”
Section: AEGAN (mentioning)
Confidence: 99%
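
A minimal sketch of the pixel-wise reconstruction loss described in that excerpt, assuming PyTorch; the layer sizes and MSE choice are illustrative placeholders (an L1 pixel difference would work the same way).

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, dim: int = 784, latent: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

model = Autoencoder()
x = torch.rand(64, 784)                      # batch of flattened images in [0, 1]
x_tilde = model(x)                           # reconstruction x̃
loss = nn.functional.mse_loss(x_tilde, x)    # pixel difference between x and x̃
loss.backward()

# After training, model.enc(x) can be reused as a feature extractor,
# as the excerpt notes [22].
```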
“…There are two main approaches for learning a bidirectional GAN: (1) adversarial feature learning (BiGAN) [8] or adversarially learned inference (ALI) [9]; (2) combining autoencoder training with GANs, e.g. VAE/GAN [22], AEGAN [23]. We focus on the latter since it usually gives better reconstruction quality and its training is more stable [36].…”
Section: Related Work (mentioning)
Confidence: 99%
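
Approach (2), combining autoencoder training with a GAN, amounts to adding an adversarial term to the reconstruction objective. The sketch below shows that combination in the spirit of VAE/GAN or AEGAN, assuming PyTorch; the single-layer networks, the weight `lam`, and the names `enc`/`dec`/`disc` are assumptions for illustration, not the cited papers' architectures.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 32))     # encoder E: x -> z
dec = nn.Sequential(nn.Linear(32, 784))     # decoder/generator G: z -> x
disc = nn.Sequential(nn.Linear(784, 1))     # discriminator D: x -> logit
bce = nn.BCEWithLogitsLoss()

x = torch.rand(64, 784)
x_rec = dec(enc(x))                         # bidirectional pass x -> z -> x̃

# Autoencoder term: keep the reconstruction faithful to the input.
rec_loss = nn.functional.mse_loss(x_rec, x)

# Adversarial term: make reconstructions look real to D
# (D itself is trained in a separate step to tell x from x_rec).
adv_loss = bce(disc(x_rec), torch.ones(64, 1))

lam = 0.1                                   # illustrative trade-off weight
total = rec_loss + lam * adv_loss
total.backward()
```

The appeal of this combination, per the excerpt, is that the reconstruction term anchors training (better stability and reconstruction quality than purely adversarial BiGAN/ALI), while the adversarial term sharpens the otherwise blurry pixel-wise reconstructions.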