Fabric defect detection models based on deep learning often require large numbers of training samples to achieve high accuracy. However, obtaining a complete dataset that covers all possible fabric textures and defects is a major challenge because fabric textures and defect forms are diverse and complex. This study developed a two-stage deep pix2pixGAN network, the Dual Deep pix2pixGAN Network (DPGAN), to address this problem. A defect generation model was trained on the DPGAN network to automatically “transfer” defects from defective fabric images to clean, defect-free fabric images, thereby augmenting the training data. To evaluate the effectiveness of the defect generation model, extensive comparative experiments were conducted to assess fabric defect detection performance before and after data augmentation. The results indicate that detection accuracy improved for the belt_yarn, hole, and stain defects.
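The abstract above does not specify the internals of DPGAN, so the following is only a minimal pix2pix-style sketch of the underlying idea: a conditional generator learns to map a clean fabric patch plus a defect mask to a defected patch, trained with an adversarial loss plus an L1 reconstruction loss, and its outputs could then augment a defect-detection training set. The toy network sizes, the mask-conditioning scheme, and the loss weight are assumptions, not the authors' architecture.

```python
# Hedged sketch of pix2pix-style defect "transfer" for data augmentation (not the authors' DPGAN).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a pix2pix U-Net generator."""
    def __init__(self, in_ch=4, out_ch=3):        # clean RGB patch + 1-channel defect mask (assumed conditioning)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-like discriminator on (condition, image) pairs."""
    def __init__(self, in_ch=7):                   # 4 condition channels + 3 image channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),     # patch-wise real/fake scores
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Dummy batch: clean patch, defect mask, and the corresponding real defected patch.
clean = torch.rand(2, 3, 64, 64)
mask = torch.rand(2, 1, 64, 64)
real_defected = torch.rand(2, 3, 64, 64)
cond = torch.cat([clean, mask], dim=1)

# Discriminator step: real pairs labelled 1, generated pairs labelled 0.
fake = G(cond).detach()
real_pred, fake_pred = D(cond, real_defected), D(cond, fake)
d_loss = bce(real_pred, torch.ones_like(real_pred)) + bce(fake_pred, torch.zeros_like(fake_pred))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: adversarial loss plus L1 reconstruction, as in pix2pix (weight 100 is an assumption).
fake = G(cond)
pred = D(cond, fake)
g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, real_defected)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Once trained, such a generator can be run over clean images with synthetic defect masks to produce additional labelled defect samples for the detection model.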
As an important branch of image fusion, multi-focus image fusion can effectively overcome the limited depth of field of optical lenses by fusing two or more partially focused images into a single fully focused image. In this paper, methods based on boundary segmentation are put forward as a distinct group of image fusion methods. Accordingly, a novel classification of image fusion algorithms is proposed: transform domain methods, boundary segmentation methods, deep learning methods, and combination fusion methods. In addition, subjective and objective evaluation criteria are listed, and eight common objective evaluation indicators are described in detail. Drawing on an extensive body of literature, this paper compares and summarizes representative methods. Finally, the main limitations of current research are discussed, and directions for the future development of multi-focus image fusion are outlined.
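The survey abstract does not name its eight objective indicators. As an illustration only, two metrics that frequently appear in multi-focus fusion evaluation are image entropy (EN) and spatial frequency (SF), both computed on the fused image; the sketch below shows one straightforward way to compute them with NumPy.

```python
# Illustrative evaluation metrics (not necessarily among the survey's eight indicators).
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (higher = more information content)."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), a measure of overall activity/sharpness."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency (horizontal gradients)
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency (vertical gradients)
    return float(np.sqrt(rf ** 2 + cf ** 2))

fused = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for a fused image
print(f"EN = {entropy(fused):.3f}, SF = {spatial_frequency(fused):.3f}")
```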
In microscopic imaging scenarios where the object thickness exceeds the depth of field of the microscope, multi-focus image fusion (MFF) is an effective way to generate an all-in-focus image. However, for nonwoven fabric, where 100 or more images may be captured, existing methods often underperform in areas near the fiber edges owing to image ghosting and noise accumulation caused by platform movement. To address this problem, this paper presents a method for fusing multi-layer micro-images that combines the spectral and spatial features of the images. First, a spectral domain-based map is generated by decomposing and reconstructing the high-frequency and low-frequency components of the images to capture edge information. Simultaneously, a spatial domain-based fusion map is built through a sharpness measurement inspired by visual perception. Finally, the two maps are combined via an optimized weight to obtain an all-in-focus fused image. Four groups of real-world data consisting of 100 multi-focus nonwoven images are used to verify the method. The experimental results demonstrate that the proposed method achieves satisfactory performance in terms of both human visual evaluation and objective evaluation compared with state-of-the-art fusion methods, including a convolutional neural network-based image fusion framework, a multi-focus image fusion method, and a region-based image fusion algorithm.
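The exact decomposition, sharpness measure, and weight optimization are not given in the abstract. The sketch below only illustrates the overall idea under stated assumptions: a Laplacian response stands in for the spectral (high-frequency, edge-oriented) measure, local variance stands in for the spatial sharpness measure, and a fixed weight alpha stands in for the optimized weighting; the winning slice index per pixel then drives the fusion.

```python
# Hedged sketch of spectral + spatial focus measures for fusing a multi-focus stack.
import numpy as np
from scipy import ndimage

def norm01(x):
    """Rescale a score map to [0, 1] so spectral and spatial terms are comparable."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def fuse_stack(stack, alpha=0.5, win=9):
    """stack: (N, H, W) grayscale multi-focus slices -> (H, W) fused all-in-focus image."""
    stack = stack.astype(np.float64)
    scores = []
    for img in stack:
        # Spectral-style measure: locally averaged energy of the high-frequency (Laplacian) response.
        spectral = ndimage.uniform_filter(np.abs(ndimage.laplace(img)), size=win)
        # Spatial measure: local variance as a simple sharpness proxy.
        mean = ndimage.uniform_filter(img, size=win)
        var = ndimage.uniform_filter(img ** 2, size=win) - mean ** 2
        scores.append(alpha * norm01(spectral) + (1.0 - alpha) * norm01(var))
    decision = np.argmax(np.stack(scores), axis=0)                # index of sharpest slice per pixel
    return np.take_along_axis(stack, decision[None], axis=0)[0]

stack = np.random.rand(5, 128, 128) * 255                         # stand-in for ~100 micro-images
fused = fuse_stack(stack)
print(fused.shape)
```

In practice the decision map would usually be smoothed or consistency-checked before selection to limit the edge artifacts the paper targets.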