Purpose: The paper aims to transfer the item image of a given clothing product to the corresponding area of a user image. Existing classical methods suffer from unconstrained deformation of the clothing and from occlusion caused by hair or poses, which leads to loss of detail in the try-on results. In this paper, the authors present a details-oriented virtual try-on network (DO-VTON), which synthesizes high-fidelity try-on images while preserving the characteristics of the target clothing.
Design/methodology/approach: The proposed try-on network consists of three modules. The fashion parsing module (FPM) generates the parsing map of a reference person image. The geometric matching module (GMM) warps the input clothing and matches it to the torso area of the reference person, guided by the parsing map. The try-on module (TOM) generates the final try-on image. In both FPM and TOM, an attention mechanism is introduced to obtain richer features, which enhances characteristic preservation. In GMM, a two-stage coarse-to-fine training strategy with a grid regularization loss (GR loss) is employed to optimize the clothing warping.
Findings: The authors propose a three-stage image-based virtual try-on network, DO-VTON, that generates realistic try-on images with extensive characteristics preserved.
Research limitations/implications: The proposed algorithm can provide a promising tool for image-based virtual try-on.
Practical implications: The proposed method is a technology that helps consumers purchase favored clothes online and reduces the return rate in e-commerce.
Originality/value: The proposed algorithm provides a promising tool for image-based virtual try-on.
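The abstract does not give the form of the GR loss. A minimal sketch follows, assuming the loss penalizes abrupt changes between adjacent intervals of the sampling grid produced by the geometric matching module, a common way to discourage unconstrained warping; the function name and grid shape are illustrative, not taken from the paper.

```python
import torch

def grid_regularization_loss(grid: torch.Tensor) -> torch.Tensor:
    """Hypothetical grid regularization (GR) loss sketch.

    `grid` is an (N, H, W, 2) sampling grid produced by a geometric
    matching / TPS module. Second-order differences between neighboring
    grid intervals are penalized so the warp stays locally smooth.
    """
    # Horizontal intervals between neighboring grid points: (N, H, W-1, 2)
    dx = grid[:, :, 1:, :] - grid[:, :, :-1, :]
    # Vertical intervals: (N, H-1, W, 2)
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]
    # Adjacent intervals should be similar (second-order smoothness).
    ddx = dx[:, :, 1:, :] - dx[:, :, :-1, :]
    ddy = dy[:, 1:, :, :] - dy[:, :-1, :, :]
    return ddx.abs().mean() + ddy.abs().mean()

if __name__ == "__main__":
    # Dummy warped grid, e.g. the output of a TPS transformation.
    grid = torch.rand(2, 32, 32, 2)
    print(grid_regularization_loss(grid))
```

In practice such a term would be added to the warping loss of the GMM stage with a small weight so that clothing texture alignment, rather than the smoothness term, dominates training.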
Previous research on fabric drape has not provided an objective and comprehensive characterization of drape characteristics. In light of this, we propose a neural network-based framework for characterizing the umbrella drape of woven fabrics. Fabric drapes with the same macro-level mechanical characteristics can be grouped together, thereby establishing objective classification criteria. Our method extracts features from drape images/point clouds and classifies them via neural networks, namely ResNet18 and the deep graph convolutional neural network (DGCNN). We assessed the effectiveness of both networks through supervised learning and selected the better candidate to distinguish and retrieve drape styles from unlabeled data. Moreover, a sketch down-sampling (SDS) method tailored to accurately represent point clouds of umbrella-shaped drapes was devised. In all, 5160 drape meshes were collected using RGB-D cameras and Geomagic. The two neural networks were trained for 30 epochs using stochastic gradient descent with a momentum of 0.9; the learning rate was set to 0.1 for ResNet18 and 0.001 for the DGCNN. Experimental results demonstrated that the DGCNN coupled with the SDS method was the optimal feature extraction solution for woven fabric drapes, reaching 97% accuracy with a coefficient of variation of 7%. Our approach therefore offers an objective and precise quantification of fabric drape and enables a downstream application of searching fabrics by drape similarity.
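A minimal sketch of the reported training configuration, assuming a standard PyTorch setup with torchvision's ResNet18; the number of drape classes, the data loader, and the training loop are illustrative, and the DGCNN branch and SDS preprocessing are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical number of drape style classes; not stated in the abstract.
NUM_CLASSES = 10

def build_resnet18_trainer(num_classes: int = NUM_CLASSES):
    """Set up ResNet18 with the hyperparameters reported in the abstract:
    SGD with momentum 0.9, learning rate 0.1, trained for 30 epochs.
    (The DGCNN branch would use the same optimizer with lr=0.001.)"""
    model = resnet18(num_classes=num_classes)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    return model, optimizer, criterion

def train(model, optimizer, criterion, loader, epochs: int = 30):
    """Plain supervised training loop over (drape image, style label) batches."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```

The abstract compares this image-based pipeline against a point-cloud pipeline (SDS + DGCNN); only the supervised classification setup common to both is sketched above.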