Image-to-image translation based on generative models has become an important research area; a representative example is CycleGAN (Cycle-Consistent Generative Adversarial Networks), a general framework for unsupervised image translation. A key advantage of CycleGAN is that it can be trained on two unpaired image sets, but it still has shortcomings in preserving semantic information and learning style-specific features. In this paper, we propose CycleGAN-AdaIN, a framework built on the CycleGAN model that translates real photos into Chinese ink paintings. To preserve the content of the input image, we replace the two cycle-consistency losses in the model structure with a single one. To learn the style information of ink paintings, we introduce an AdaIN (Adaptive Instance Normalization) module before the decoding stage of the generator. In addition, to refine the details of the generated image, we add an MS-SSIM (Multi-Scale Structural Similarity Index) term to the reconstruction loss, yielding higher-quality images. Experimental results on FID, Kernel MMD, PSNR, and SSIM show that our method accomplishes the task of transferring real photos to ink paintings and achieves better performance than the baseline model.
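To make the style-transfer mechanism concrete, the AdaIN operation referenced above aligns the per-channel statistics of content features with those of style features. The following is a minimal NumPy sketch of the standard AdaIN formula, not the authors' actual implementation; tensor shapes and the `eps` stabilizer are illustrative assumptions.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization (sketch).

    Normalizes each channel of `content` to zero mean / unit std,
    then rescales it with the per-channel mean and std of `style`.
    Both inputs are assumed to be feature maps of shape (C, H, W).
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # eps avoids division by zero for constant channels
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

After this operation, the output carries the spatial structure of the content features but the channel-wise statistics (and hence, roughly, the style) of the style features, which is why inserting it before the generator's decoder lets the network absorb ink-painting style information.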