2022 | Preprint
DOI: 10.48550/arxiv.2204.12678

Optimized latent-code selection for explainable conditional text-to-image GANs

Abstract: The task of text-to-image generation has achieved remarkable progress due to advances in conditional generative adversarial networks (GANs). However, existing conditional text-to-image GAN approaches mostly concentrate on improving both image quality and semantic relevance, but ignore the explainability of the model, which plays a vital role in real-world applications. In this paper, we present a variety of techniques to take a deep look into the latent space and semantic space of the conditional text-t…

Cited by 1 publication (1 citation statement) | References 21 publications
“…Zhang et al. [9] proposed DiverGAN, which inserts a dense layer into the pipeline to address the lack-of-diversity problem present in current single-stage text-to-image GAN models. Zhang et al. [27] introduced linear-interpolation and triangular-interpolation techniques to explain the single-stage text-to-image GAN model. Moreover, a Good/Bad data set was created to select successfully generated images and corresponding good latent codes.…”
Section: A. Text-to-Image Generation
Confidence: 99%
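The two interpolation probes mentioned in the citing statement can be illustrated with a short sketch. The NumPy code below is a minimal illustration, not the paper's actual implementation: the function names, the `generator` and `text_emb` placeholders, and the reading of "triangular interpolation" as a barycentric (convex) combination of three latent codes are all assumptions.

```python
import numpy as np

def linear_interpolation(z0, z1, steps=8):
    """Latent codes along the straight line between z0 and z1.

    Feeding each code (with a fixed text embedding) to the generator
    shows how the output image changes along the path.
    """
    lambdas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - lam) * z0 + lam * z1 for lam in lambdas]

def triangular_interpolation(z0, z1, z2, steps=5):
    """Barycentric combinations of three latent codes (assumed reading).

    Each code is a convex combination a*z0 + b*z1 + c*z2 with
    a + b + c = 1 and a, b, c >= 0, sweeping the triangle spanned
    by the three anchor codes.
    """
    codes = []
    for i in range(steps):
        for j in range(steps - i):
            a = i / (steps - 1)
            b = j / (steps - 1)
            c = 1.0 - a - b
            codes.append(a * z0 + b * z1 + c * z2)
    return codes

# Hypothetical usage: `generator` and `text_emb` stand in for a trained
# single-stage text-to-image GAN and a caption embedding.
rng = np.random.default_rng(0)
z0, z1, z2 = (rng.standard_normal(100) for _ in range(3))
path = linear_interpolation(z0, z1)
grid = triangular_interpolation(z0, z1, z2)
# images = [generator(z, text_emb) for z in path]
```

Inspecting the generated images along such latent paths is one way to probe which latent directions the model treats as semantically meaningful, which is in the spirit of the explainability analysis the citing statement describes.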