2021
DOI: 10.1145/3450626.3459884
TryOnGAN

Abstract: Given a pair of images---target person and garment on another person---we automatically generate the target person in the given garment. Previous methods mostly focused on texture transfer via paired data training, while overlooking body shape deformations, skin color, and seamless blending of garment with the person. This work focuses on those three components, while also not requiring paired data training. We designed a pose conditioned StyleGAN2 architecture with a clothing segmentation branch that is train…

Cited by 59 publications (15 citation statements) · References 28 publications
“…Figure 6 shows the result of image editing by distilling five interpretable directions from the eigenvectors of A_ToRGB^⊺ A_ToRGB, where α_i (i ∈ [1,5]) is the intensity of each optimal direction n_i* in Equation (12). These results demonstrate that we can edit the image color by manually adapting α_i while maintaining the structure and the texture of the generated clothing images, thanks to the second module of our system.…”
Section: Quality Of SeFa Color Editing
confidence: 56%
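The passage above describes a SeFa-style closed-form factorization: edit directions are the top eigenvectors of A^⊺A for a layer weight matrix A, and images are edited by moving the style code along direction n_i* with intensity α_i. A minimal NumPy sketch of that recipe, with a toy random matrix standing in for the ToRGB weights (all names and shapes here are illustrative, not the cited system's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 512))           # toy ToRGB weight: 512-dim style -> RGB

# Eigen-decomposition of A^T A; eigenvectors with the largest eigenvalues
# are the directions n_i* that change the layer's output the most.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)  # eigh returns ascending eigenvalues
directions = eigvecs[:, ::-1][:, :5]        # top-5 directions n_1* ... n_5*

w = rng.standard_normal(512)                # a style code to edit
alphas = np.array([1.5, 0.0, -0.8, 0.0, 0.3])  # per-direction intensities alpha_i
w_edited = w + directions @ alphas          # edited style code
```

Because `eigh` operates on a symmetric matrix, the recovered directions are orthonormal, so each α_i controls an independent axis of variation.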
“…One of the most promising methods is StyleGAN [9], which improves PGGAN [10] by using Adaptive Instance Normalization (AdaIN) [11] to generate realistic high-resolution images. StyleGAN image generation tasks can be applied to fashion image generation techniques such as style transfer-based virtual try-on [12] and fashion outfit generation [13]. StyleGAN2 [14] improves the original StyleGAN's training stability by refining AdaIN and introducing a lazy path length regularization method.…”
Section: Application Of Generative Adversarial Network In Fashion
confidence: 99%
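The citing passage refers to Adaptive Instance Normalization (AdaIN), which normalizes each channel of a content feature map and rescales it with style statistics; in StyleGAN the scale and shift come from a learned style vector. A minimal sketch of the operation on toy NumPy arrays (shapes and names are illustrative):

```python
import numpy as np

def adain(content, style_scale, style_shift, eps=1e-5):
    """AdaIN: per-channel normalize `content`, then apply style scale/shift.

    content: (C, H, W) feature map; style_scale, style_shift: (C,) vectors.
    """
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_scale[:, None, None] * normalized + style_shift[:, None, None]

x = np.random.default_rng(1).standard_normal((8, 4, 4))   # toy 8-channel features
y = adain(x, style_scale=np.full(8, 2.0), style_shift=np.ones(8))
```

After the call, each output channel has mean ≈ 1 and standard deviation ≈ 2, regardless of the input statistics, which is exactly the property style transfer exploits.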
“…They then use the two networks to train a segmentation mask generation network in an unsupervised manner. This notion, of extracting a segmentation map with the help of StyleGAN's structure, has been employed similarly by others [ZLG*21, LYK*21, LKL*21, LVKS21].…”
Section: Discriminative Applications
confidence: 99%
“…While all the aforementioned works showed incredible results and promise in real‐world scenarios, they are limited in the domains they operate over. Some works have explored going beyond the facial domain and have explored applying StyleGAN for full‐body synthesis in various applications such as virtual try‐on and portrait reposing [LVKS21,ALY*21].…”
Section: Encoding and Inversion
confidence: 99%
“…Technically, there are two general approaches for controlling the images' generation features under continuous conditions. One approach achieves attribute control by identifying operable paths in the latent space, for instance, the separation hyperplane of a binary attribute, since the normal of the hyperplane indicates a unique direction that can be used to control the classification attribute. However, it can only provide relative control (e.g., turning the face older or rotating the face toward the left) without providing explicit control.…”
Section: Introduction
confidence: 99%
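The hyperplane approach described above (in the spirit of InterFaceGAN) fits a linear classifier on latent codes labeled with a binary attribute; the unit normal of the separating hyperplane is then used as a relative edit direction. A toy sketch with fully synthetic data, using a least-squares linear probe as a stand-in for the SVM typically used:

```python
import numpy as np

rng = np.random.default_rng(2)
true_dir = np.zeros(512)
true_dir[0] = 1.0                                # hidden attribute axis (synthetic)

z = rng.standard_normal((2000, 512))             # synthetic latent codes
labels = (z @ true_dir > 0).astype(float)        # binary attribute labels

# Linear probe: its weight vector approximates the hyperplane normal.
w, *_ = np.linalg.lstsq(z, labels - 0.5, rcond=None)
n = w / np.linalg.norm(w)                        # unit edit direction

z_edit = z[0] + 3.0 * n                          # push the attribute "up" (relative)
```

As the quoted passage notes, this only gives relative control: moving along n increases or decreases the attribute, but the step size α has no explicit, calibrated meaning.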