A new approach is presented in this paper for extracting hairiness from a depth image of fabric based on a predicted fabric-surface plane. The depth from focus (DFF) technique is used to establish the depth image of pilled fabrics from a series of image layers captured under a microscope. The depth image of a pilled fabric carries information on both the hairiness and the fabric surface, with the hairiness located above the surface. However, the depth value of the fabric surface covered with hairiness cannot be obtained directly. Therefore, for hairiness extraction, a predicted plane of the fabric surface is fitted from several base points selected on the fabric surface, and any target above the predicted plane is regarded as hairiness and extracted. An oversegmentation method based on the mean shift algorithm is used to select the base points. First, seed points are marked along the Sobel edges; the seed points are then grown into oversegmented areas, called split pieces in this paper. The split pieces belonging to the fabric surface are selected as base points according to the depth value and the spatial direction of each split piece. Finally, the predicted plane of the fabric surface is established from these base points. Significance testing shows that it is reasonable to model the fabric surface as a plane, and residual examination shows that the predicted plane correctly yields the depth value (z) of the fabric surface at any plane position (x, y). The extracted hairiness images show that the hairiness is obtained correctly and completely through the predicted plane.
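The core of the extraction step can be sketched as a least-squares plane fit followed by thresholding against the plane. This is a minimal illustration, not the paper's implementation: the function names and the `margin` parameter are assumptions, and base-point selection via mean shift oversegmentation is taken as already done.

```python
import numpy as np

def fit_surface_plane(base_points):
    """Least-squares fit of z = a*x + b*y + c to base points
    assumed to lie on the fabric surface."""
    pts = np.asarray(base_points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def extract_hairiness(depth, coeffs, margin=0.0):
    """Boolean mask of pixels whose depth lies above the predicted
    plane by more than `margin` (a tolerance for surface noise)."""
    a, b, c = coeffs
    ys, xs = np.indices(depth.shape)
    plane = a * xs + b * ys + c
    return depth > plane + margin
```

In practice the residuals returned by the fit would feed the significance and residual tests the abstract mentions.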
A fibrous filtering material is a kind of fiber assembly whose structure exhibits a three-dimensional (3D) network with dense microscopic open channels. The geometrical/morphological attributes of the fibers in the network, such as orientation, curvature and compactness, are key to the filtration performance of the material. However, most previous studies were based on 2D micro-images of the materials, which cannot accurately measure these important 3D features of a filter's structure. In this paper, we present an imaging method to reconstruct the 3D structure of a fibrous filter from its optical microscopic images. First, a series of images of the fiber assembly were captured at different depth layers as the stage moved vertically. Then a fusion image was established by extracting fiber edges from each layered image. Next, the 3D coordinates of the fiber edges were determined using the sharpness/clarity of each edge pixel in the layered images. Finally, the 3D structure of the fiber system was reconstructed through distance transformation based on the locations of the fiber edges.
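The per-pixel depth assignment from sharpness can be sketched as picking, for every pixel, the stack layer where a focus measure peaks. This is a simplified stand-in for the method described above: the squared-Laplacian focus measure and the function name are assumptions, and edge extraction and distance transformation are omitted.

```python
import numpy as np

def depth_from_focus(stack, layer_depths):
    """For each pixel, choose the layer where a local sharpness
    measure (squared Laplacian response) is largest, and return
    that layer's physical depth."""
    sharpness = []
    for img in stack:
        # 4-neighbour discrete Laplacian (edges wrap; fine for a sketch)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharpness.append(lap ** 2)
    best_layer = np.argmax(np.stack(sharpness), axis=0)
    return np.asarray(layer_depths)[best_layer]
```

A pixel that is in focus only in one layer yields a strong Laplacian response there, so the argmax recovers that layer's depth.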
Fabric defect models based on deep learning often demand numerous training samples to achieve high accuracy. However, obtaining a complete dataset containing all possible fabric textures and defects is challenging because of the wide variety of fabric textures and defect forms. This study created a two-stage deep pix2pixGAN network called the Dual Deep pix2pixGAN Network (DPGAN) to address this problem. A defect generation model was trained on the DPGAN network to automatically "transfer" defects from defective fabric images to clean, defect-free fabric images, thus augmenting the training data. To evaluate the effectiveness of the defect generation model, extensive comparative experiments were conducted to assess fabric defect detection performance before and after data enhancement. The results indicate that detection accuracy was improved for the belt_yarn, hole, and stain defects.
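The augmentation goal can be conveyed with a deliberately naive sketch: copying a masked defect region from a defective image onto a clean one. DPGAN learns this mapping with a generative model so the transferred defect blends with the target texture; this cut-and-paste stand-in (function name and mask convention are assumptions) only illustrates what "defect transfer" produces as training data.

```python
import numpy as np

def naive_defect_transfer(clean, defective, defect_mask):
    """Paste the pixels under `defect_mask` from a defective fabric
    image onto a clean one, yielding a new synthetic training sample.
    A learned generator (as in DPGAN) would instead synthesise the
    defect coherently with the clean image's texture."""
    augmented = clean.copy()
    augmented[defect_mask] = defective[defect_mask]
    return augmented
```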
As an important branch of image fusion, multi-focus image fusion can effectively overcome the limited depth of field of optical lenses by fusing two or more partially focused images into a single fully focused image. In this paper, methods based on boundary segmentation are put forward as a distinct group of image fusion methods. A novel classification of image fusion algorithms is thus proposed: transform domain methods, boundary segmentation methods, deep learning methods, and combination fusion methods. In addition, subjective and objective evaluation standards are listed, and eight common objective evaluation indicators are described in detail. On the basis of an extensive literature survey, this paper compares and summarizes various representative methods. Finally, the main limitations of current research are discussed, and the future development of multi-focus image fusion is anticipated.
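The basic principle underlying many spatial-domain fusion methods can be sketched as a per-pixel selection rule: take each pixel from whichever source image is locally sharper. This is a minimal illustration of the idea, not any of the surveyed algorithms; the squared-Laplacian sharpness measure and function name are assumptions, and real methods add region/boundary reasoning to avoid artifacts at focus boundaries.

```python
import numpy as np

def fuse_multifocus(img_a, img_b):
    """Pixel-wise fusion of two partially focused images: keep each
    pixel from the source with the larger local sharpness response."""
    def sharpness(img):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return lap ** 2
    return np.where(sharpness(img_a) >= sharpness(img_b), img_a, img_b)
```

Boundary segmentation methods refine exactly this decision map, smoothing the in-focus/out-of-focus boundary before selecting pixels.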