Based on the generative adversarial network (GAN), this paper proposes an unsupervised method for colorizing grayscale images and applies it to the automatic coloring of grayscale comic sketches, addressing the high production cost and heavy labor demands of color comics. The proposed method improves the classical GAN model in three ways. First, residual connections are added to the generator and discriminator models to ensure that training develops in the right direction. Second, a gradient penalty term is added to the discriminator's loss function to accelerate convergence and speed up iteration. Third, the activation function in the original model is replaced with the Mish activation function, so that the information flowing through the network has higher accuracy and better generalization. The model was trained on the Anime Sketch Colorization Pair dataset [1] from Kaggle, and the final experimental results show that the method is practical and feasible.
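The Mish activation mentioned above has a standard closed form, x · tanh(softplus(x)); it is smooth and non-monotonic, which is why it is often swapped in for ReLU. A minimal sketch (pure Python, not the authors' implementation):

```python
import math

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x).

    Smooth everywhere and non-monotonic (it dips slightly below zero for
    small negative inputs before saturating toward 0).
    """
    return x * math.tanh(math.log1p(math.exp(x)))

# Behaves almost like the identity for large positive inputs,
# and passes exactly through the origin.
print(mish(0.0))   # 0.0
print(mish(5.0))   # close to 5.0
```

In a real network this would be applied element-wise to tensors (e.g. `torch.nn.Mish` in PyTorch), but the scalar form above captures the definition.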
In this paper, we propose a similar-image retrieval algorithm based on feature fusion and locality-sensitive hashing to address two problems: individual features represent image content inadequately, and retrieval over massive image data takes too long. Fusing global features with attention features gives the image representation both color-structure and semantic information, which better characterizes image content. In the retrieval stage, locality-sensitive hashing encodes the image features, cosine similarity serves as the similarity measure, and an index is built to improve retrieval efficiency. Compared with other methods, the proposed algorithm improves average retrieval precision and recall on the Caltech 256 and Corel5k datasets while greatly reducing retrieval time.
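The standard locality-sensitive hash family for cosine similarity is random-hyperplane hashing (SimHash): each random hyperplane contributes one sign bit, and vectors with a small angle between them agree on most bits, so Hamming distance on the codes approximates angular distance. A minimal sketch under that assumption (the dimensions, bit count, and vectors below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_planes(dim: int, n_bits: int) -> np.ndarray:
    """One random hyperplane (Gaussian normal vector) per hash bit."""
    return rng.standard_normal((n_bits, dim))

def hash_vector(v: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Sign bit of each projection: which side of each hyperplane v lies on."""
    return (planes @ v > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

dim, n_bits = 64, 32
planes = make_planes(dim, n_bits)

query = rng.standard_normal(dim)
near = query + 0.05 * rng.standard_normal(dim)  # nearly the same direction
far = rng.standard_normal(dim)                  # unrelated direction

d_near = hamming(hash_vector(query, planes), hash_vector(near, planes))
d_far = hamming(hash_vector(query, planes), hash_vector(far, planes))
# d_near should be much smaller than d_far.
```

In a full index, the bit codes would be grouped into buckets so that only candidates sharing a bucket with the query are compared by exact cosine similarity.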
Various techniques were combined to optimize an optical inspection system designed to automatically inspect defects in manufactured paper bowls. A self-assembled system captured images of defects on the bowls, employing an image sensor with a multi-pixel array that combines a complementary metal-oxide semiconductor (CMOS) with a photodetector. A combined ring light served as the light source, while an infrared (IR) LED matrix panel provided constant IR light to highlight the outer edges of the inspected objects. To enhance defect inspection on the produced paper bowls, captured images were processed with Gaussian filtering, Sobel operators, binarization, and connected-component analysis. Once the non-contact system's machine vision method was complete, defects on the produced paper bowls were inspected using the system developed in this study. Three inspection methods were used: internal, external, and bottom inspection. All three could detect surface features of the paper bowls, including dirt, burrs, holes, and uneven thickness. The results showed that the average machine vision inspection time per paper bowl was significantly less than the time required for manual inspection; the investigated machine vision system is therefore an efficient method for inspecting defects in fabricated paper bowls.
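The last two stages of the processing pipeline named above, binarization and connected-component analysis, are what turn a filtered image into a count of distinct defect regions. A minimal 4-connected sketch (the toy image, threshold, and blob layout are invented for illustration, not taken from the study):

```python
from collections import deque

import numpy as np

def binarize(img: np.ndarray, thresh: float) -> np.ndarray:
    """Threshold a grayscale image into a 0/1 foreground mask."""
    return (img > thresh).astype(np.uint8)

def connected_components(mask: np.ndarray) -> tuple[np.ndarray, int]:
    """Label 4-connected foreground regions via BFS flood fill.

    Returns (label map, number of components); each defect blob
    receives a distinct positive label.
    """
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# Toy "inspection" frame: two bright blobs (defects) on a dark background.
img = np.zeros((8, 8))
img[1:3, 1:3] = 200  # defect 1
img[5:7, 5:8] = 180  # defect 2
labels, n_defects = connected_components(binarize(img, 128))
# n_defects == 2: the two blobs are counted as separate regions.
```

Production systems typically use an optimized routine such as `scipy.ndimage.label` or OpenCV's `connectedComponents` for this step; the BFS version above shows the underlying idea.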