The detection and localization of yarn-dyed fabric defects is a crucial and challenging problem in real production scenarios. Recently, unsupervised fabric defect detection methods based on convolutional neural networks (CNNs) have attracted increasing attention. However, CNNs often fail to model the global receptive field of an image, which limits the defect detection ability of the model. In this article, we propose a U-shaped Swin Transformer network with a Quadtree attention framework for unsupervised yarn-dyed fabric defect detection. The method builds a U-shaped network on Swin Transformer blocks: the Swin Transformer adopts local window attention to learn features effectively, while the U-shaped structure enables pixel-level reconstruction of the image. Quadtree attention is used to capture the global features of the image and model its global receptive field, leading to better reconstruction of the yarn-dyed fabric image. An improved Euclidean residual between the input and its reconstruction enhances the detection of subtle defects and yields the final defect detection results. The proposed method avoids the difficulty of collecting large numbers of defective samples and labeling them manually. Our method achieves 51.34% F1 and 38.30% intersection over union (IoU) on the YDFID-1 dataset. Experimental results show that the proposed method achieves higher accuracy in fabric defect detection and localization than competing methods.
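The abstract does not give the exact form of the "improved" Euclidean residual, so the following is only a minimal sketch of a plain per-pixel Euclidean residual between a fabric image and its reconstruction, with an assumed Gaussian smoothing and statistical thresholding step as post-processing. The function names, the `sigma` and `k` parameters, and the mean-plus-k-standard-deviations rule are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def euclidean_residual_map(image, reconstruction, sigma=1.0):
    """Per-pixel Euclidean residual between an image and its reconstruction.

    image, reconstruction: float arrays of shape (H, W, C) in [0, 1].
    Returns an (H, W) residual map; larger values suggest likely defects.
    """
    # Sum the channel-wise squared differences at each pixel,
    # then take the square root (the standard Euclidean residual).
    residual = np.sqrt(np.sum((image - reconstruction) ** 2, axis=-1))
    # Gaussian smoothing suppresses isolated reconstruction noise
    # (an assumed post-processing step, not specified in the abstract).
    return gaussian_filter(residual, sigma=sigma)

def segment_defects(residual, k=3.0):
    """Binarize the residual map with a mean + k*std threshold (assumed rule)."""
    threshold = residual.mean() + k * residual.std()
    return residual > threshold
```

A typical usage under these assumptions would pass a defect-free-trained model's reconstruction to `euclidean_residual_map` and then call `segment_defects` to obtain the binary defect mask scored against ground truth with F1 and IoU.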