We propose an anime style transfer model that generates anime faces from human face images. We improve the model by modifying the normalization function to retain more feature information. To keep the facial feature positions of the generated anime face consistent with those of the human face, we propose a facial landmark loss that measures the error between the generated image and the real human face image. To avoid obvious color deviation in the generated images, we introduce a perceptual color loss into the loss function. In addition, because reasonable metrics for evaluating the quality of anime images are lacking, we propose the Fréchet anime inception distance (FAID), which measures the distance between the distributions of generated and real anime images in a high-dimensional feature space, as an indicator of the quality of the generated anime images. In our user study, 74.46% of participants rated the images produced by the proposed method as the best among the compared models, and the proposed method achieves an FAID score of 126.05. Our model performs best in both the user study and FAID, indicating better performance in terms of both human visual perception and distribution matching. According to the experimental results and user feedback, the proposed method generates higher-quality results than existing methods.
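For reference, a minimal sketch of the metric, under the assumption that FAID follows the standard Fréchet inception distance formulation with the Inception features replaced by an anime-domain feature extractor:
\[
\mathrm{FAID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right),
\]
where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the feature embeddings of real and generated anime images, respectively; lower values indicate that the two distributions are closer.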