2022
DOI: 10.1016/j.knosys.2022.109750
CMAFGAN: A Cross-Modal Attention Fusion based Generative Adversarial Network for attribute word-to-face synthesis

Cited by 13 publications (2 citation statements)
References 39 publications
“…CMAF block. Inspired by the methodological framework of CMAFGAN [25], for simple notation, we denoted and as x and y, respectively. As shown in Figure 3, the CMAF block began with six 1 × 1 convolution layers applied to x and y.…”
Section: Methods (mentioning; confidence: 99%)
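The excerpt above describes a CMAF block that opens with six 1 × 1 convolutions over the two modal feature maps x and y. A minimal sketch of one plausible reading, assuming the six convolutions serve as query/key/value projections (three per modality) for bidirectional cross-modal attention — the class name, channel layout, and residual fusion are illustrative assumptions, not the authors' exact design:

```python
import torch
import torch.nn as nn


class CMAFBlockSketch(nn.Module):
    """Hypothetical cross-modal attention fusion block: six 1x1
    convolutions project feature maps x and y (three per modality,
    assumed here to act as query/key/value projections)."""

    def __init__(self, channels: int):
        super().__init__()
        # Three 1x1 convs per modality (q/k/v projections).
        self.qx = nn.Conv2d(channels, channels, kernel_size=1)
        self.kx = nn.Conv2d(channels, channels, kernel_size=1)
        self.vx = nn.Conv2d(channels, channels, kernel_size=1)
        self.qy = nn.Conv2d(channels, channels, kernel_size=1)
        self.ky = nn.Conv2d(channels, channels, kernel_size=1)
        self.vy = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor, y: torch.Tensor):
        b, c, h, w = x.shape
        # Flatten spatial dims so attention runs over h*w positions.
        qx, kx, vx = (f(x).flatten(2) for f in (self.qx, self.kx, self.vx))
        qy, ky, vy = (f(y).flatten(2) for f in (self.qy, self.ky, self.vy))
        # Cross-modal attention in both directions: x attends to y, y to x.
        attn_xy = torch.softmax(qx.transpose(1, 2) @ ky, dim=-1)  # (b, hw, hw)
        attn_yx = torch.softmax(qy.transpose(1, 2) @ kx, dim=-1)
        # Residual fusion of the attended features back into each modality.
        fused_x = x + (vy @ attn_xy.transpose(1, 2)).view(b, c, h, w)
        fused_y = y + (vx @ attn_yx.transpose(1, 2)).view(b, c, h, w)
        return fused_x, fused_y
```

Both outputs keep the input shape, so the block can be dropped between convolutional stages of a generator without changing surrounding layer sizes.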
“…Nevertheless, the discriminator aims to distinguish between the synthetic image and the matched natural image. The input conditions used by GAN-based image synthesis methods vary, including sparse sketches [19][20][21], Gaussian noise [22][23], text descriptions [24][25][26], natural images [27][28], and semantic layouts [29][30][31][32]. Considering the great success of GANs in image synthesis, we propose a novel GAN-based approach to tackle image synthesis conditioned only on semantic layout.…”
Section: Introduction (mentioning; confidence: 99%)