2022
DOI: 10.3390/electronics11111737
aRTIC GAN: A Recursive Text-Image-Conditioned GAN

Abstract: Generative Adversarial Networks have recently demonstrated the capability to synthesize photo-realistic real-world images. However, they still struggle to offer high controllability of the output image, even if several constraints are provided as input. In this work, we present a Recursive Text-Image-Conditioned GAN (aRTIC GAN), a novel approach for multi-conditional image generation under concurrent spatial and text constraints. It employs a few line drawings and short descriptions to provide informative yet hu…

Cited by 2 publications (2 citation statements)
References 62 publications
“…Also, MRP-GAN (Qi, Fan, et al., 2021), SAM-GAN (Peng et al., 2021), DM-GAN (M. Zhu, Pan, et al., 2019), DAE-GAN (Ruan et al., 2021), KT-GAN (Tan et al., 2021), Bridge-GAN (M. Yuan & Peng, 2020), CF-GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM-GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images based on descriptive texts describing these images. Likewise, Y. Zhou (2021), M. Z.…”
Section: Text-to-Image Translation (mentioning, confidence: 99%)
“…This GAN model takes both an image and a text describing an object to generate a new image containing that object. Also, MRP-GAN (Qi, Fan, et al., 2021), SAM-GAN (Peng et al., 2021), DM-GAN (M. Zhu, Pan, et al., 2019), DAE-GAN (Ruan et al., 2021), KT-GAN (Tan et al., 2021), Bridge-GAN (M. Yuan & Peng, 2020), CF-GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM-GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images based on descriptive texts describing these images. Likewise, Y. Zhou (2021), M. Z. Khan et al. (2021), and Y. Zhou and Shimada (2021) proposed GAN models to synthesize face images based on the text describing these faces.…”
Section: GAN Applications (mentioning, confidence: 99%)