Due to the similarities between cashmere and wool fibers, their automatic identification remains a major challenge for the textile industry. In this paper, we identify micrographs of cashmere and wool using a bag-of-words model with spatial pyramid matching. In our approach, each fiber image is treated as a collection of feature vectors; the vectors extracted from the original dataset were fed into a support vector machine for supervised classification. The codebook size and the pyramid resolution level were systematically investigated. The experimental results indicated that image segmentation made a positive contribution to classification accuracy, and the overall performance of the model was robust across various blend ratios. These results verify that bag-of-words with spatial pyramid matching is an effective approach to identifying cashmere and wool fibers.
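The pipeline the abstract describes can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the function name, the hard codeword assignment, and the L1 normalization are assumptions, and the paper's actual local descriptor and SVM kernel are not specified here. Each local descriptor is quantized to its nearest codeword, per-cell histograms are accumulated over a spatial pyramid, and the concatenated vector would then be fed to an SVM.

```python
import numpy as np

def spatial_pyramid_histogram(keypoints, descriptors, codebook, img_size, levels=2):
    """Vector-quantize local descriptors against the codebook, then build
    per-cell codeword histograms over a spatial pyramid (levels 0..levels)
    and concatenate them into a single feature vector."""
    K = codebook.shape[0]
    # hard assignment: nearest codeword for every descriptor
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    w, h = img_size
    hists = []
    for lvl in range(levels + 1):
        cells = 2 ** lvl                                  # cells x cells grid at this level
        cx = np.minimum(keypoints[:, 0] * cells // w, cells - 1)
        cy = np.minimum(keypoints[:, 1] * cells // h, cells - 1)
        for gy in range(cells):
            for gx in range(cells):
                in_cell = (cx == gx) & (cy == gy)
                hists.append(np.bincount(words[in_cell], minlength=K))
    v = np.concatenate(hists).astype(float)
    return v / max(v.sum(), 1.0)                          # L1-normalize for the SVM
```

With levels = 2 and a codebook of size K, the feature vector has K·(1 + 4 + 16) dimensions, which is why the abstract investigates codebook size and resolution level jointly.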
In this paper, we present a novel non-parametric method for precisely reconstructing a three-dimensional (3D) virtual mannequin from anthropometric measurements and mask image(s) based on a Graph Convolutional Network (GCN). The proposed method avoids heavy dependence on a particular parametric body model such as SMPL or SCAPE and predicts mesh vertices directly, a task that a GCN handles far more naturally than a typical Convolutional Neural Network (CNN). To further improve reconstruction accuracy and make the reconstruction more controllable, we incorporate the anthropometric measurements into the developed GCN. Our non-parametric reconstruction results distinctly outperform the previous graph convolution method, both visually and in terms of anthropometric accuracy. We also demonstrate that the proposed network can reconstruct a plausible 3D mannequin from a single-view mask. The proposed method can be effortlessly extended to a parametric method by appending a Multilayer Perceptron (MLP) that regresses the parameter space of a Principal Component Analysis (PCA) model to achieve 3D reconstruction as well. Extensive experimental results demonstrate that our anthropometric GCN is very useful in improving reconstruction accuracy, and that the proposed method is effective and robust for 3D mannequin reconstruction.
INDEX TERMS Graph convolutional network, non-parametric mannequin reconstruction, anthropometric mannequin design, parametric reconstruction.
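The two core ideas of the abstract, graph convolution over mesh vertices and conditioning on anthropometric measurements, can be sketched in a few lines. This is a generic illustration under assumed conventions (symmetric adjacency normalization, ReLU, measurement tiling), not the paper's actual architecture:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer over mesh vertices: aggregate each vertex's
    neighbourhood through the symmetrically normalized adjacency (with
    self-loops), project with learned weights W, and apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

def with_measurements(X, m):
    """Condition vertex features on anthropometric measurements by tiling the
    measurement vector m onto every vertex before the next GCN layer."""
    return np.concatenate([X, np.tile(m, (X.shape[0], 1))], axis=1)
```

Because the layer operates directly on the mesh graph, the network can regress vertex positions without passing through a parametric shape space, which is the non-parametric property the abstract emphasizes.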
The 3D virtual mannequin is widely used in the apparel industry, and its importance continues to grow. This work develops a new optimization-based 3D virtual mannequin reconstruction system. All mannequins reconstructed by the proposed approach share an identical topology, that is, there is a point-to-point correspondence among them, which greatly facilitates subsequent processing in fashion design, made-to-measure, and virtual try-on. The inputs to the proposed system are a template human body, a raw scan (represented as a mesh), and a very sparse set of corresponding landmarks. The approach uses optimization to drive the template to deform onto the raw scan. There is no special requirement on the raw meshes: they may have different numbers of vertices and triangles, or may even be incomplete. The proposed method needs only 21 landmarks as hard constraints to reconstruct a mannequin with tens of thousands of vertices, and these landmarks can be extracted automatically for standard mannequin reconstruction. Besides the standard mannequin, the proposed system can also reconstruct display mannequins, that is, mannequins with various poses. The experiments visualize the optimization procedure and verify that the optimization is efficient and effective. Quantitative analysis also shows that the reconstruction error satisfies the requirements of fashion design and tailoring.
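The landmark-driven template deformation can be illustrated with a small linear least-squares sketch. This is an assumption-laden stand-in, not the paper's optimization: it replaces the hard landmark constraints with a heavily weighted soft penalty, and uses a generic Laplacian term to preserve local surface detail while the 21 landmark vertices are pulled to their targets.

```python
import numpy as np

def deform_template(V, L, lm_idx, lm_targets, w=1e3):
    """Deform template vertices V (n x 3) so the landmark vertices reach their
    targets while a Laplacian term preserves local surface shape.
    Minimizes ||L V' - L V||^2 + w ||V'[lm_idx] - lm_targets||^2 by solving
    the normal equations (a soft stand-in for true hard constraints)."""
    n = V.shape[0]
    S = np.zeros((len(lm_idx), n))                 # landmark selection matrix
    S[np.arange(len(lm_idx)), lm_idx] = 1.0
    A = L.T @ L + w * (S.T @ S)
    b = L.T @ (L @ V) + w * (S.T @ lm_targets)
    return np.linalg.solve(A, b)
```

A practical system would iterate such solves while updating dense correspondences against the raw scan; this sketch shows only the landmark-constrained inner step.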
To evaluate the drapability of woven fabrics more accurately, a three-dimensional point cloud of a draped woven fabric was captured with an in-house drape scanner. A new indicator, the total drape angle (TDA), was proposed based on the three-dimensional drape shape to characterize a woven fabric's ability to drape. The relationship between TDA and the drape coefficient (DC) was analyzed to validate the performance of TDA. The results indicated that TDA is more stable and representative than the traditional DC in characterizing the drapability of a woven fabric. In addition, the drape angle distribution function (DADF) of the triangular mesh was employed to describe fabric drape and to bridge the gap between the drape configuration and the warp bending rigidity of the woven fabric. The results showed a correlation coefficient of 0.952 between the measured warp bending rigidity and the value predicted from DADF and fabric weight.
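The exact definitions of TDA and DADF are given in the paper, but the classical drape coefficient the abstract compares against can be estimated directly from a scanned point cloud. A minimal numpy sketch, with the caveat that the polar-binning outline extraction is an illustrative assumption and not the authors' procedure: DC relates the area enclosed by the projected drape outline to the areas of the support disk and of the undraped fabric.

```python
import numpy as np

def drape_coefficient(points, r_disk, r_fabric, n_bins=360):
    """Estimate the classical drape coefficient DC from a 3D point cloud:
    project onto the horizontal plane, take the maximum radius per angular bin
    as the drape outline, integrate its polar area, and normalize against the
    support-disk and flat-fabric areas."""
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    r_max = np.zeros(n_bins)
    np.maximum.at(r_max, bins, r)                       # outline radius per bin
    area = 0.5 * (r_max ** 2).sum() * (2 * np.pi / n_bins)  # polar area integral
    a_disk = np.pi * r_disk ** 2
    a_fabric = np.pi * r_fabric ** 2
    return (area - a_disk) / (a_fabric - a_disk)
```

A perfectly stiff fabric whose outline stays at the fabric radius gives DC near 1, while a fully limp fabric hanging at the disk radius gives DC near 0, which is the scale against which TDA's stability is judged.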