Purpose: Existing clothing parsing methods make little use of dataset-level information. This paper proposes a novel clothing parsing method that exploits higher-level outfit combinatorial consistency knowledge from the whole clothing dataset to improve the accuracy of segmenting clothing images.
Design/methodology/approach: The authors propose an Outfit Memory Net (OMNet) that augments the original features by aggregating dataset-level prior clothing combination information. Specifically, the authors design an Outfit Matrix (OM) to represent the clothing combination information of a single image and an Outfit Memory Module (OMM) to store the clothing combination information of all images in the training set, i.e. dataset-level clothing combination information. In addition, the authors propose a Multi-scale Aggregation Module (MAM) that aggregates the clothing combination information in a multi-scale manner to handle the large variance in object scale in clothing images.
Findings: Experiments on the Colorful Fashion Parsing Dataset (CFPD) show that the method achieves 93.15% pixel accuracy (PA) and 51.24% mean class-wise intersection over union (mIoU), which are satisfactory parsing results compared with existing methods such as PSPNet, DANet and DeepLabV3. Moreover, a per-category comparison of segmentation accuracy shows that MAM effectively improves the segmentation of small objects.
Originality/value: With the rise of online shopping platforms and the continuous development of deep learning technology, applications such as clothing recommendation, matching, classification and virtual try-on systems have emerged in the clothing field. Clothing parsing is the key technology for realizing these applications, so improving its accuracy is necessary.
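As a rough illustration of the dataset-level prior described above, the sketch below builds a per-image outfit co-occurrence matrix from ground-truth category labels and accumulates it over the training set into a memory that can later bias per-pixel class scores. The class count, the names build_outfit_matrix, OutfitMemory and reweight_scores, and the way the prior is applied are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (assumed, not the authors' exact formulation) of a
# dataset-level "outfit memory": each image contributes a binary
# co-occurrence matrix over clothing categories, and the running
# average over the training set acts as a combination prior.
import numpy as np

NUM_CLASSES = 23  # assumed category count; adjust to the dataset actually used

def build_outfit_matrix(label_map: np.ndarray, num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Outfit Matrix for one image: OM[i, j] = 1 if categories i and j co-occur."""
    present = np.zeros(num_classes, dtype=np.float32)
    present[np.unique(label_map)] = 1.0
    return np.outer(present, present)

class OutfitMemory:
    """Accumulates per-image outfit matrices into a dataset-level prior."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        self.memory = np.zeros((num_classes, num_classes), dtype=np.float32)
        self.count = 0

    def update(self, label_map: np.ndarray) -> None:
        self.memory += build_outfit_matrix(label_map)
        self.count += 1

    def prior(self) -> np.ndarray:
        """Normalized co-occurrence prior over the training set."""
        return self.memory / max(self.count, 1)

def reweight_scores(scores: np.ndarray, predicted_classes: np.ndarray,
                    prior: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Bias raw class scores (H, W, C) toward categories that frequently
    co-occur with the classes already predicted in the image."""
    boost = prior[predicted_classes].mean(axis=0)  # (C,) affinity with current outfit
    return scores + alpha * boost  # broadcast over the spatial dimensions
```

In this toy version the prior simply shifts the scores of categories that usually appear together (e.g. skirt with blouse); the paper's OMM and MAM instead inject this information at the feature level and at multiple scales.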
The spotlight imagery acquired by the Gaofen-3 satellite has a resolution of 1 m, which gives it great potential for 3-D localization. However, there have been no public reports evaluating the 3-D localization accuracy of Gaofen-3 spotlight synthetic aperture radar (SAR) images. From this perspective, three study areas were selected, and Gaofen-3 spotlight stereo SAR images of these areas were acquired. Without ground control points (GCPs), the images were used for initial 3-D localization based on the Rational Polynomial Coefficient (RPC) model; the plane accuracy was generally better than 10 m, while the elevation accuracy was generally worse than 37 m. The RPC model was then optimized using geometric calibration technology, and the 3-D localization accuracy was assessed again. The elevation accuracy improved significantly, to generally better than 5 m, and the plane accuracy also improved, to generally better than 6 m. These results show that Gaofen-3 spotlight stereo images are of good quality and that high plane accuracy can be obtained even without GCPs. Geometric calibration improves the 3-D localization accuracy, with a particularly marked effect on elevation accuracy. The improvement in plane accuracy, by contrast, depends on the properties of the stereo-image pairs: it is obvious for asymmetric pairs but only moderate for symmetric pairs.
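For context, RPC-based localization of the kind referred to above models each image's line and sample coordinates as ratios of cubic polynomials in normalized latitude, longitude and height, and recovers a ground point from a stereo pair by least-squares minimization of the reprojection error in both images. The sketch below is a generic illustration, not the study's workflow: the term ordering follows the common RPC00B convention, and the dictionary keys, function names and solver choice are assumptions.

```python
# Minimal sketch (assumed, illustrative) of RPC forward projection and
# stereo 3-D localization by reprojection-error minimization.
import numpy as np
from scipy.optimize import least_squares

def rpc_terms(P, L, H):
    """Cubic polynomial terms (common RPC00B ordering) in normalized lat P, lon L, height H."""
    return np.array([
        1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
        P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3, P*H*H, L*L*H, P*P*H, H**3,
    ])

def rpc_project(rpc, lat, lon, h):
    """Project a ground point to (line, sample) using one image's RPC parameters.

    `rpc` is assumed to be a dict holding the usual offsets/scales and the four
    20-element coefficient vectors (line_num, line_den, samp_num, samp_den).
    """
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (h - rpc["h_off"]) / rpc["h_scale"]
    t = rpc_terms(P, L, H)
    line = rpc["line_num"] @ t / (rpc["line_den"] @ t)
    samp = rpc["samp_num"] @ t / (rpc["samp_den"] @ t)
    return (line * rpc["line_scale"] + rpc["line_off"],
            samp * rpc["samp_scale"] + rpc["samp_off"])

def localize_stereo(rpc_a, rpc_b, obs_a, obs_b, x0):
    """Recover (lat, lon, h) from matched points obs_a/obs_b = (line, sample) in two images."""
    def residuals(x):
        lat, lon, h = x
        la, sa = rpc_project(rpc_a, lat, lon, h)
        lb, sb = rpc_project(rpc_b, lat, lon, h)
        return [la - obs_a[0], sa - obs_a[1], lb - obs_b[0], sb - obs_b[1]]
    return least_squares(residuals, x0).x
```

The geometric calibration step described in the abstract effectively corrects systematic biases in these RPC parameters (e.g. with GCP-derived affine compensation), which is why the elevation accuracy improves so markedly after optimization.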