2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00317
Facial Attribute Transformers for Precise and Robust Makeup Transfer

Cited by 20 publications (10 citation statements)
References 18 publications
“…In recent years, there have been many approaches based on generative adversarial networks. [1][2][3][4][5] BeautyGAN [1] was one of the first methods to use a GAN for makeup transfer, applying a pixel-level histogram loss to achieve instance-level makeup transfer. PSGAN [2] uses the face parsing map and facial landmark points to construct a pixel-level correspondence between the source image and the reference image, solving the problem of face misalignment across different head poses and facial expressions.…”
Section: Makeup Transfer
confidence: 99%
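The pixel-level histogram loss mentioned in the statement above can be sketched as follows: build a pseudo ground truth by matching the histogram of the generated makeup region to that of the reference region, then penalize the L1 distance to it. This is a minimal single-channel NumPy sketch of the general technique, not BeautyGAN's exact implementation; the function names and the bin layout are assumptions.

```python
import numpy as np

def histogram_match(source, reference, bins=256):
    """Remap source pixel values so their histogram matches the reference's.
    Classic CDF-lookup histogram matching on one channel; values assumed
    to lie in [0, 256)."""
    src_hist, _ = np.histogram(source, bins=bins, range=(0, 256))
    ref_hist, _ = np.histogram(reference, bins=bins, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source bin, find the reference bin with the nearest CDF value.
    lookup = np.searchsorted(ref_cdf, src_cdf).clip(0, bins - 1)
    bin_idx = np.clip(source.astype(int), 0, bins - 1)
    return lookup[bin_idx].astype(np.float32)

def histogram_loss(generated, reference, mask):
    """Pixel-level histogram loss: L1 distance between the generated region
    and a histogram-matched pseudo ground truth built from the reference.
    `mask` selects the facial region (e.g. lips or eye shadow) to match."""
    gen = generated[mask]
    ref = reference[mask]
    target = histogram_match(gen, ref)
    return float(np.abs(gen - target).mean())
```

In practice such a loss is computed per color channel and per facial region (obtained from a parsing map), and combined with adversarial and perceptual terms.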
“…In recent years, there have been many approaches based on generative adversarial networks [1-5]. BeautyGAN [1] was one of the first methods to use a GAN for makeup transfer, applying a pixel-level histogram loss to achieve instance-level makeup transfer.…”
Section: Related Work
confidence: 99%
“…Recently, MPViT [66] explores multi-scale patch embedding and a multi-path structure that enable both fine and coarse feature representations simultaneously. Benefiting from advances in basic vision transformer models, many task-specific models have been proposed and have achieved significant progress in downstream vision tasks, e.g., object detection [13,134,43,22], semantic segmentation [130,17,98,116,126,29,28], generative adversarial networks [59,102,58], low-level vision [16,72,127], video understanding [81,7,94,104,118], self-supervised learning [2,24,80,53,4,14,38,23,117,110,3,45], and neural architecture search [103,67,15,19,20]. Inspired by practical improvements in EA variants, this work migrates them to Transformer design and builds a powerful visual model with higher precision and efficiency than contemporary works.…”
Section: Vision Transformers
confidence: 99%
“…Therefore, research on facial beauty prediction (FBP) is scientifically important for understanding the perception mechanism of the human brain and simulating human intelligence. Simultaneously, exploring how to better interpret, quantify, and predict beauty helps people understand and describe beauty more scientifically and objectively, further promoting the rapid development of related industries, such as makeup evaluation (Wei et al. 2022), makeup transfer (Wan et al. 2022), personalized recommendation (Lin et al. 2019a, b), and cosmetic surgery planning (Xie et al. 2015). In recent years, researchers have worked to apply deep learning to FBP.…”
Section: Introduction
confidence: 99%