Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413514
Down to the Last Detail

Abstract: Virtual try-on has attracted lots of research attention due to its potential applications in e-commerce, virtual reality and fashion design. However, existing methods can hardly preserve fine-grained details (e.g., clothing texture, facial identity, hair style, skin tone) during generation, due to non-rigid body deformation and multi-scale details. In this work, we propose a multi-stage framework to synthesize person images, where fine-grained details can be well preserved. To address the long-range tra…

Cited by 21 publications (8 citation statements)
References 33 publications
“…LITERATURE REVIEW Sangho Lee, Seoyoung Lee and Joonseok Lee (2022) [2] introduce the Clothes Fitting Module (CFM) [1] within the three-stage Details-Preserving Virtual Try-On (DP-VTON) [1] model to effectively disentangle the characteristics of the person and the source clothes. By employing a VGG perceptual loss (LVGG) and an L1 loss (L1) for training, DP-VTON significantly improves the quality of virtual try-on results, outperforming state-of-the-art methods such as CP-VTON+ [13], ACGPN [19], and PF-AFN. DP-VTON [1] renders the target clothing's fine-grained characteristics accurately and faithfully and seamlessly fits various poses and body shapes, addressing limitations observed in other approaches and advancing the field of virtual try-on technology. CP-VTON [3]: Wang et al. presented the Characteristic-Preserving Image-based Virtual Try-On Network (CP-VTON), which achieves convincing try-on image synthesis.…”
Section: V
confidence: 99%
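The VGG and L1 losses mentioned in the excerpt above can be sketched as follows. This is a minimal illustration, not DP-VTON's actual implementation: `feature_fn` stands in for a pretrained VGG feature extractor, and the weights `w_l1`/`w_vgg` are hypothetical placeholders.

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-wise L1 loss between generated and ground-truth images."""
    return np.abs(pred - target).mean()

def perceptual_loss(pred, target, feature_fn):
    """VGG-style perceptual loss: L1 distance in a feature space.

    `feature_fn` is a stand-in for a pretrained VGG network; any callable
    mapping an image array to a feature array works for this sketch.
    """
    return np.abs(feature_fn(pred) - feature_fn(target)).mean()

def total_loss(pred, target, feature_fn, w_l1=1.0, w_vgg=1.0):
    """Weighted sum of the L1 and perceptual terms (weights are illustrative)."""
    return w_l1 * l1_loss(pred, target) + w_vgg * perceptual_loss(pred, target, feature_fn)
```

With `feature_fn` set to the identity, the perceptual term reduces to a second L1 term, which makes the combination easy to sanity-check.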
“…Most virtual try-on models have three major modules: segmentation, warping, and try-on synthesis [41], [49], [182]. The segmentation module is responsible for generating a semantic layout that aligns with the desired target pose.…”
Section: Multi-pose Virtual Try-on
confidence: 99%
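The three-module pipeline described above (segmentation, warping, try-on synthesis) can be sketched structurally as below. All three functions are hypothetical stubs; real systems use learned networks for each stage, and the assumption that layout label 1 marks the clothing region is purely illustrative.

```python
import numpy as np

def segment(person, target_pose):
    """Segmentation module: predict a semantic layout aligned with the
    target pose (stub: returns one part label per pixel, all background)."""
    h, w = person.shape[:2]
    return np.zeros((h, w), dtype=np.int64)

def warp(clothes, layout):
    """Warping module: deform the in-shop clothing to fit the predicted
    layout (stub: identity warp)."""
    return clothes

def synthesize(person, warped_clothes, layout):
    """Try-on synthesis module: fuse the person image with the warped
    clothes (stub: overlay wherever the layout marks clothing as label 1)."""
    out = person.copy()
    mask = layout == 1  # illustrative convention: label 1 = upper clothes
    out[mask] = warped_clothes[mask]
    return out

def try_on(person, clothes, target_pose):
    """Run the three stages in sequence: segment, warp, synthesize."""
    layout = segment(person, target_pose)
    warped = warp(clothes, layout)
    return synthesize(person, warped, layout)
```

The interface matters more than the stub bodies: the segmentation output conditions both the warping and the final synthesis, which is the shared skeleton the survey attributes to most virtual try-on models.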
“…This method gathers volunteers and asks them to choose the best images generated from different virtual try-on models based on specific criteria such as photorealism and accuracy. ACGPN [195], WUTON [85], PF-AFN [58], StylePoseGAN [159], HR-VTON [111], DCI-VTON [63], MG-VTON [41], FashionOn [77], TB-VTON [182], FW-GAN [42], MV-TON [207] and video attention-based method [176] all use user studies to demonstrate their superiority over their predecessors.…”
Section: Qualitative Evaluation
confidence: 99%
“…The other component they designed is a new U-Transformer, which is effective for generating highly realistic images in try-on synthesis. In [2], the authors investigated virtual try-on under arbitrary poses, which has attracted lots of research attention due to its huge potential applications. However, existing methods can hardly preserve the details of clothing texture and facial identity (face, hair) while fitting new clothes and poses onto a person.…”
Section: Literature Survey
confidence: 99%