With rising living standards, people place higher demands on the safety and artistry of clothing. Plant-dyeing artworks use plant sap to dye natural woven fabrics, producing prints that are close to nature, rich in style, and free from the harmful components of chemical dyes, and they are therefore widely recognized and appreciated. However, the types and designs of plant-dyeing artworks depend on the artist's creative inspiration and are limited by human working efficiency, which constrains their creation and slows innovation. Deep learning has made significant progress in the field of artistic creation. In this paper, we propose combining the Disco Diffusion model with the creation of plant-dyeing artworks, using the diffusion model's high-quality image generation and multi-modal content description to design and generate such works. The original style is first modeled from photographs of existing plant-dyeing artworks; the range of styles is then expanded with text descriptions; finally, the Disco Diffusion model generates the plant-dyeing artwork. Experimental results show that generating plant-dyeing paintings with the Disco Diffusion model achieves better visual results. Through a controllable generation process and high-quality outputs, stylized plant-dyeing designs are obtained automatically, providing strong support for the creation of plant-dyeing art.
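As a rough illustration of the text-conditioned generation step described above, the sketch below uses the Hugging Face diffusers library with a generic text-to-image diffusion pipeline as a stand-in for Disco Diffusion (which is distributed as a CLIP-guided diffusion notebook rather than a packaged API). The model checkpoint, prompt wording, and sampling parameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: text-conditioned diffusion generation of a plant-dyeing-style image.
# StableDiffusionPipeline is used here as a stand-in for Disco Diffusion;
# the checkpoint, prompt, and parameters are illustrative, not from the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A text description encoding the target plant-dyeing style (hypothetical wording).
prompt = ("a textile print dyed with natural plant sap, indigo and madder tones, "
          "soft organic patterns, hand-dyed fabric texture")

image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("plant_dyeing_sample.png")
```

In such a pipeline the photographs of existing plant-dyeing artworks would inform the style vocabulary used in the prompt, while the text description controls the expansion to new style types.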
With the improvement of image-processing tools and the flexibility of digital image editing, automatic image inpainting has found important applications in computer vision and has become an important and challenging research topic in image processing. Through analysis of the exemplar-based image inpainting method, we found that the priority computation for the image patches to be filled favors textures over geometric structure, even though geometric structure matters more to the filling than texture, resulting in texture over-extension or discontinuity of geometric structure in the image. We apply an image decomposition model to separate the image to be inpainted into two components, texture and cartoon, which are then filled separately, and we improve the priority computation for the texture and cartoon components so that it suits the character of each. To address the mismatch problem, we propose a measure based on the global structure information of the target patch and the candidate patch: adding the mean and variance of the pixels in an image patch, which represent its global structure, to the matching formula improves the accuracy of selecting the best matching patch to fill the missing region. We compare our model with the Criminisi algorithm and an improved variant of it on synthetic and natural images, and compute PSNR and SSIM values between the original and inpainted images to verify the effectiveness of the proposed algorithm. The experimental results show that the proposed algorithm produces more perceptually pleasing inpainted results than the Criminisi algorithm and its improved version.
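To make the patch-matching idea concrete, the following sketch shows one plausible form of such a measure: the usual sum-of-squared-differences over the known pixels of the target patch, augmented with penalties on the difference of patch means and variances. The weighting coefficients and the exact way the terms are combined are assumptions for illustration, not the authors' formula.

```python
import numpy as np

def patch_distance(target, candidate, mask, alpha=1.0, beta=1.0):
    """Distance between a target patch (with missing pixels) and a fully known
    candidate patch: SSD over known pixels plus penalties on the difference of
    patch means and variances (global-structure terms).
    The weights `alpha` and `beta` are illustrative, not from the paper.

    target, candidate: float arrays of the same shape (patch pixels)
    mask: boolean array, True where the target pixel is known
    """
    # Classic SSD over the known pixels of the target patch.
    ssd = np.sum((target[mask] - candidate[mask]) ** 2)

    # Global-structure terms: compare mean and variance of the candidate patch
    # with those of the known part of the target patch.
    mu_t, mu_c = target[mask].mean(), candidate.mean()
    var_t, var_c = target[mask].var(), candidate.var()

    return ssd + alpha * (mu_t - mu_c) ** 2 + beta * (var_t - var_c) ** 2

def best_matching_patch(target, mask, candidates):
    """Return the candidate patch minimizing the augmented distance."""
    scores = [patch_distance(target, c, mask) for c in candidates]
    return candidates[int(np.argmin(scores))]
```

Evaluation of the inpainted results against the originals can then use standard PSNR and SSIM implementations, e.g. skimage.metrics.peak_signal_noise_ratio and skimage.metrics.structural_similarity.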