2019
DOI: 10.48550/arxiv.1910.09399
Preprint

A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis

Cited by 5 publications (5 citation statements) | References 51 publications
“…Since the original GAN frameworks were initially built upon images, there is no doubt that the number of GAN applications in the image domain surpasses other areas such as text, voice and video. Multiple reviews [28][29][30][31][32] focus on image synthesis, even though the major proportion of the general GAN reviews mentioned above are also in the image domain. Huang et al. [29] categorize the image-synthesis GAN frameworks into three types based on the overall architecture.…”
Section: Image GANs
confidence: 99%
“…Similar applications are also reviewed elsewhere [28], with additional applications such as face ageing and 3D image synthesis. Moreover, Agnese et al. [32] direct attention to GAN models that are conditioned on text and produce images. Some of these models fall under semantic enhancement GANs, where the main goal is to ensure that the text is semantically coherent with the generated image.…”
Section: Image GANs
confidence: 99%
“…We focus on methods developed for Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which have demonstrated the most impressive results to date. The interested reader can find dedicated surveys on each conditional task (Agnese et al., 2019; Wu et al., 2017). The literature on conditional generative models (Mirza & Osindero, 2014) can be clustered depending on the type of conditional data.…”
Section: Conditional Generative Models
confidence: 99%
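The conditional-GAN framing referenced in the statement above (Mirza & Osindero, 2014) underlies most text-to-image models the survey covers: both generator and discriminator receive the conditioning signal, here a text embedding, alongside the noise vector or the image. The sketch below is only a minimal illustration of that conditioning-by-concatenation idea; the class names, layer sizes, and the assumption of a precomputed text embedding are placeholders and are not taken from the survey or from any specific cited model.

# Minimal text-conditioned GAN sketch (cGAN-style conditioning, Mirza & Osindero, 2014).
# The text embedding is assumed to be precomputed (e.g., by a sentence encoder);
# layer sizes and class names are illustrative assumptions only.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, z_dim=100, txt_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + txt_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, img_pixels),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, txt_emb):
        # Conditioning by concatenation: the generator sees both noise and text.
        return self.net(torch.cat([z, txt_emb], dim=1))

class TextConditionedDiscriminator(nn.Module):
    def __init__(self, txt_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels + txt_dim, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 1),  # real/fake score for the (image, text) pair
        )

    def forward(self, img_flat, txt_emb):
        return self.net(torch.cat([img_flat, txt_emb], dim=1))

# Usage: generate a batch of images from random noise and dummy text embeddings.
if __name__ == "__main__":
    G = TextConditionedGenerator()
    D = TextConditionedDiscriminator()
    z = torch.randn(4, 100)
    txt = torch.randn(4, 256)  # stand-in for real sentence embeddings
    fake = G(z, txt)           # shape (4, 64*64*3)
    score = D(fake, txt)       # shape (4, 1)
    print(fake.shape, score.shape)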
“…T2I has made significant progress in the last 5 years. Thus, several studies [8][9][10][11] have put forth a semantic taxonomy for adversarial text-to-image synthesis (T2I), summarizing the efforts made mainly using GANs [12]. In contrast, this paper focuses on two primary gaps in previous studies.…”
Section: Introduction
confidence: 99%
“…Viewing T2I as a vast domain, we comprehensively present a semantic taxonomy of Text-guided Visual-Output (T2Vo), in contrast to either text-to-image synthesis alone [8] or T2I using GANs exclusively [9][10][11].…”
confidence: 99%