Color-patterned fabrics exhibit changeable patterns, a low incidence of defective samples, and defects of various forms, so the unsupervised inspection of color-patterned fabrics has gradually become a research hotspot in fabric defect detection. However, owing to the redundant information passed through skip connections and the limitations of post-processing, current reconstruction-based unsupervised methods struggle to detect some defects of color-patterned fabrics. In this article, we propose an Attention-Gate-based U-shaped Reconstruction Network (AGUR-Net) together with a dual-threshold segmentation post-processing method. AGUR-Net consists of an encoder, an Atrous Spatial Pyramid Pooling (ASPP) module, and an attention-gate-weighted fusion residual decoder. The encoder, built on EfficientNet-B2, extracts representative features from the input image. The ASPP module enlarges the receptive field of the network and introduces multi-scale information into the decoder. The attention-gate-weighted fusion residual decoder fuses encoder features with decoder features to produce the reconstructed image, and the dual-threshold segmentation post-processing yields the final defect detection results. Our method achieves a precision of 59.38%, a recall of 59.1%, an F1 score of 54.31%, and an intersection over union (IoU) of 41.18% on the public dataset YDFID-1. The experimental results show that, compared with several state-of-the-art unsupervised fabric defect detection methods, the proposed method better detects and locates the defects of color-patterned fabrics.
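The abstract does not detail the attention gate itself. Below is a minimal sketch of a standard additive attention gate in the style of Attention U-Net (Oktay et al., 2018), which a gate of this kind presumably resembles; AGUR-Net's exact weighted residual fusion may differ, and the names AttentionGate, g_ch, x_ch, and inter_ch are illustrative.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate (Attention U-Net style).

    The decoder feature g gates the encoder skip feature x, so that
    only defect-relevant skip information reaches the decoder.
    """
    def __init__(self, g_ch: int, x_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1, bias=False)
        self.w_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1, bias=False)
        self.psi = nn.Sequential(
            nn.Conv2d(inter_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed to share spatial size (H, W)
        a = self.relu(self.w_g(g) + self.w_x(x))  # additive attention
        alpha = self.psi(a)                       # (N, 1, H, W) weights in [0, 1]
        return x * alpha                          # suppress irrelevant skip features
```

In a U-shaped decoder, the gated skip output would then be fused (e.g., concatenated or added) with the upsampled decoder feature before the next decoding stage.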
The detection and localization of yarn-dyed fabric defects is a crucial and challenging problem in real production scenarios. Recently, unsupervised fabric defect detection methods based on convolutional neural networks (CNNs) have attracted increasing attention. However, CNNs often fail to model the global receptive field of images, which limits the defect detection ability of the model. In this article, we propose a U-shaped Swin Transformer network with Quadtree attention for unsupervised yarn-dyed fabric defect detection. In this U-shaped network, the Swin Transformer adopts local attention to learn features effectively, and the U-shaped structure enables pixel-level reconstruction of images. Quadtree attention captures the global features of the image and models the global receptive field, thereby better reconstructing the yarn-dyed fabric image. An improved Euclidean residual enhances the detection of inconspicuous defects and yields the final detection results. The proposed method avoids the difficulty of collecting a large number of defective samples and of manual labeling. Our method obtains 51.34% F1 and 38.30% intersection over union (IoU) on the YDFID-1 dataset. Experimental results show that the proposed method achieves higher accuracy in fabric defect detection and localization than other methods.
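The abstract does not specify what the "improved" Euclidean residual changes, but the plain residual it builds on is straightforward: a per-pixel Euclidean distance between the input and its reconstruction, followed by thresholding. A minimal sketch, assuming (H, W, C) float images in [0, 1] and a hypothetical threshold tau:

```python
import numpy as np

def euclidean_residual(x: np.ndarray, x_hat: np.ndarray) -> np.ndarray:
    """Per-pixel Euclidean residual between input x and reconstruction x_hat.

    x, x_hat: float arrays of shape (H, W, C), values in [0, 1].
    Returns an (H, W) map; pixels the network reconstructs poorly
    (likely defects) receive large values.
    """
    return np.sqrt(np.sum((x - x_hat) ** 2, axis=-1))

# Example: binarize the residual map into a defect mask.
# tau is a hypothetical threshold chosen on defect-free samples.
# mask = euclidean_residual(x, x_hat) > tau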