2021
DOI: 10.1109/access.2021.3058674

DGattGAN: Cooperative Up-Sampling Based Dual Generator Attentional GAN on Text-to-Image Synthesis

Cited by 14 publications (3 citation statements)
References 21 publications

“…Also, MRP-GAN (Qi, Fan, et al., 2021), SAM-GAN (Peng et al., 2021), DM-GAN (M. Zhu, Pan, et al., 2019), DAE-GAN (Ruan et al., 2021), KT-GAN (Tan et al., 2021), Bridge-GAN (M. Yuan & Peng, 2020), CF-GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM-GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images based on descriptive texts that describe these images. Likewise, Y. Zhou (2021), M. Z.…”
Section: Text-to-image Translation (mentioning)
confidence: 99%

“…This GAN model takes both an image and a text that describes an object to generate a new image containing this object. Also, MRP-GAN (Qi, Fan, et al., 2021), SAM-GAN (Peng et al., 2021), DM-GAN (M. Zhu, Pan, et al., 2019), DAE-GAN (Ruan et al., 2021), KT-GAN (Tan et al., 2021), Bridge-GAN (M. Yuan & Peng, 2020), CF-GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM-GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images based on descriptive texts that describe these images. Likewise, Y. Zhou (2021), M. Z. Khan et al. (2021), and Y. Zhou and Shimada (2021) proposed GAN models to synthesize face images based on the text describing these faces.…”
Section: GAN Applications (mentioning)
confidence: 99%

“…In this approach, the generator incorporates a dynamic selection mechanism to match text features with image features, enabling more accurate synthesis. Meanwhile, the discriminator utilizes a multi-class discriminant method, where mask segmentation is introduced as an additional class to enhance its discrimination capacity [23]. The proposed framework, called RaSeedGAN (RAndomly-SEEDed super-resolution GAN), is designed to evaluate field quantities from randomly sparse sensors without relying on full-field high-resolution training.…”
Section: Related Work (mentioning)
confidence: 99%
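
The excerpt above only names the two mechanisms it credits to this line of work: a dynamic selection step in the generator that matches text features to image features, and a discriminator that classifies more than real versus fake by treating mask segments as an extra class. The sketch below is a minimal, hypothetical PyTorch illustration of what such components could look like; the module names, dimensions, and the per-channel gating formulation are assumptions for illustration, not the cited paper's implementation.

# Minimal sketch (not the cited paper's code): a generator block that gates
# between text-conditioned and image features, and a discriminator head that
# scores several classes (e.g. real / fake / mask-segment) instead of one logit.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DynamicSelectionBlock(nn.Module):
    """Fuse image features with a sentence embedding through a learned gate."""

    def __init__(self, img_dim: int = 64, txt_dim: int = 256):
        super().__init__()
        self.project_txt = nn.Linear(txt_dim, img_dim)  # map text to image channels
        self.gate = nn.Sequential(                      # per-channel selection weights
            nn.Linear(img_dim * 2, img_dim),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W); txt_feat: (B, txt_dim)
        b, c, h, w = img_feat.shape
        txt = self.project_txt(txt_feat)                    # (B, C)
        pooled = img_feat.mean(dim=(2, 3))                  # (B, C) global image summary
        alpha = self.gate(torch.cat([pooled, txt], dim=1))  # (B, C), values in [0, 1]
        alpha = alpha.view(b, c, 1, 1)
        txt_map = txt.view(b, c, 1, 1).expand(-1, -1, h, w)
        return alpha * txt_map + (1.0 - alpha) * img_feat   # gated text/image fusion


class MultiClassDiscriminatorHead(nn.Module):
    """Classify a feature map as real, fake, or mask-segment region."""

    def __init__(self, in_dim: int = 64, num_classes: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_dim, num_classes),  # logits over {real, fake, mask}
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.head(feat)


if __name__ == "__main__":
    block = DynamicSelectionBlock()
    disc = MultiClassDiscriminatorHead()
    img = torch.randn(2, 64, 16, 16)   # fake intermediate image features
    txt = torch.randn(2, 256)          # fake sentence embedding
    fused = block(img, txt)            # (2, 64, 16, 16)
    logits = disc(fused)               # (2, 3)
    print(fused.shape, logits.shape)

In this toy version the gate simply interpolates per channel between the projected text embedding and the image features; a full model would more likely use spatial or word-level attention, and the three-way head would be trained with a cross-entropy term alongside the usual adversarial loss.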