2017
DOI: 10.1007/978-3-319-70096-0_22

Learning Inverse Mapping by AutoEncoder Based Generative Adversarial Nets

Abstract: The inverse mapping of a GAN's (Generative Adversarial Nets) generator has great potential value. Hence, some works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as being hard to train or performing poorly. For these reasons, we propose a new approach based on using an inverse generator (IG) model a…
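
The abstract's core idea admits a compact illustration. Below is a minimal sketch of the "direct learning" variant it mentions: given a pretrained, frozen generator G, train a separate inverse-generator network on (z, G(z)) pairs so that it regresses generated images back to their latent codes. The architecture, latent size, and all names (InverseGenerator, inverse_step, LATENT_DIM) are illustrative assumptions, not the authors' actual model.

import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed latent size, not taken from the paper

class InverseGenerator(nn.Module):
    # Illustrative CNN mapping a 3x64x64 generated image back to a latent code.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

def inverse_step(G, IG, opt, batch_size=64):
    # One "direct learning" step: sample z, render x = G(z) with G frozen,
    # then regress the recovered code IG(x) onto the true z.
    z = torch.randn(batch_size, LATENT_DIM)
    with torch.no_grad():  # the pretrained generator stays fixed
        x = G(z)
    loss = nn.functional.mse_loss(IG(x), z)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
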

Cited by 61 publications (45 citation statements) · References 11 publications
“…In contrast to Luo et al. [11], we demonstrate our inversion approach on data samples drawn from test sets of real data samples. To make inversion more challenging, in the case of the Omniglot dataset, we invert image samples that come from a different distribution to the training data.…”
Section: Relation to Previous Work
confidence: 99%
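
The quoted passage contrasts Luo et al.'s learned encoder with an optimization-based inversion that works on arbitrary test images. A minimal sketch of that alternative follows, under the assumption of a differentiable pretrained generator G; the hyperparameters and the function name invert_image are illustrative.

import torch

def invert_image(G, x, latent_dim=100, steps=500, lr=0.01):
    # Gradient descent on z to minimize ||G(z) - x||^2 for a single image x.
    # Because only z is optimized (G stays fixed), the method applies to any
    # target image, including samples from a distribution the GAN never saw.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)
        loss.backward()
        opt.step()
    return z.detach()
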
“…On the other hand, Luo et al. [11] train an encoding network after the GAN has been trained, which means that their approach may be applied to pre-trained models. One concern about the approach of Luo et al. [11] is that it may not be an accurate reflection of what the GAN has learned, since the learned decoder may over-fit to the examples it is trained on. For this reason, the approach of Luo et al. [11] may not be suitable for inverting image samples that come from a different distribution to the training data.…”
Section: Relation to Previous Work
confidence: 99%
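
The over-fitting concern raised above suggests a simple diagnostic: compare round-trip reconstruction error on images drawn from the GAN's own prior against error on held-out or out-of-distribution images. A hedged sketch, with recon_error an illustrative helper rather than anything from the cited works:

import torch

def recon_error(G, IG, x):
    # Round-trip error ||G(IG(x)) - x||^2. If this is low on images from
    # G's own prior but high on held-out real images, the learned encoder
    # has over-fit to the samples it was trained on.
    with torch.no_grad():
        return torch.nn.functional.mse_loss(G(IG(x)), x).item()
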
“…We compare reconstructions against inference-capable models: AGE [31] and ALI [6]. We also train ProgGAN for reconstruction as follows (compare to e.g., [24,15,17,4]). We train the network normally until convergence, and then use the latent vector of the discriminator also as the latent input for the generator (properly normalized).…”
Section: CelebA and CelebA-HQ
confidence: 99%
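
The reconstruction setup the quoted passage describes can be sketched as follows, assuming a hook D_features that returns the discriminator's latent-sized penultimate feature vector; the excerpt does not specify the normalization, so the rescaling to the prior's expected norm below is one plausible reading of "properly normalized":

import torch

def reconstruct(G, D_features, x, eps=1e-8):
    # Encode x with the discriminator trunk, rescale the feature vector so
    # its norm matches the expected norm (~sqrt(dim)) of an N(0, I) prior,
    # then decode with the generator. Assumes z has shape (batch, dim).
    z = D_features(x)
    z = z / (z.norm(dim=1, keepdim=True) + eps)
    z = z * (z.shape[1] ** 0.5)
    return G(z)
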
“…Network-type-based: in addition, several GAN variants have been named after the network topology used in the GAN configuration, such as the DCGAN based on deep convolutional neural networks [19], the AEGAN based on autoencoders [71], the C-RNN-GAN based on continuous recurrent neural networks [72], the AttnGAN based on attention mechanisms [73], and the CapsuleGAN based on capsule networks [74].…”
Section: B. Category of Adversarial Network
confidence: 99%