Recent works based on deep learning and facial priors have performed well in super-resolving severely degraded facial images. However, owing to limitations in illumination, the pixel resolution of the surveillance camera itself, the focusing area, and human motion, captured face images are often blurred or even deformed. To address this problem, we propose Face Restoration Generative Adversarial Networks to increase the resolution and restore the details of blurred faces. The framework comprises a Head Pose Estimation Network, a Postural Transformer Network, and Face Generative Adversarial Networks. In this paper, we employ the following: (i) the Swish-B activation function, used in the Face Generative Adversarial Networks to accelerate the convergence of the cross-entropy cost function, (ii) a special prejudgment monitor, added to improve the accuracy of the discriminator, and (iii) a modified Postural Transformer Network, combined with a 3D face reconstruction network to correct faces across different expressions and pose angles. Our method improves the resolution of face images and performs well in image restoration. We demonstrate that our method produces high-quality faces and surpasses state-of-the-art methods on the blind face restoration task for in-the-wild images; in particular, our 8× SR SSIM and PSNR are 0.078 and 1.16 higher, respectively, than FSRNet on AFLW.
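The Swish-B activation in (i) is presumably a β-parameterized variant of the standard Swish function; the abstract does not give its exact form, so the sketch below assumes the common definition swish(x) = x · sigmoid(βx), with `beta` as a hypothetical tunable parameter:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).

    Assumption: the paper's Swish-B follows this beta-parameterized form;
    the exact parameterization is not stated in the abstract.
    Smooth and non-monotonic, it often converges faster than ReLU in GANs.
    """
    return x / (1.0 + np.exp(-beta * x))
```

At β = 1 this reduces to the SiLU activation; as β grows, the function approaches ReLU, which is why β is sometimes left learnable.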
Pose variation and self-occlusion are open issues that severely degrade the performance of pose-invariant face recognition (PIFR). Existing solutions to PIFR either generalize poorly because they rely on challenging pose normalization or are complicated to implement on account of deep neural networks. To mitigate the impact of extreme poses on PIFR, we propose Cross-Pose Generative Adversarial Networks (CP-GAN) to frontalize a profile face with its identity unaltered by learning the mapping between profile and frontal faces in image space. The generator is an encoder-decoder U-net that generates a frontal face image by fusing multiple profile images to achieve better PIFR performance. The siamese discriminative network extracts deep representations of the generated frontal face and the ground truth without introducing extra networks for verification and recognition. Beyond the implementable architecture, the problem is further alleviated by combining an adversarial loss for both the generator and the discriminator, a symmetry loss, a patch-wise loss, and an identity loss that guides the identity-preserving property of the generated frontal view. Quantitative and qualitative evaluations on both controlled and in-the-wild datasets show that our solution to PIFR produces satisfactory perceptual results and surpasses state-of-the-art methods on extreme-pose face recognition.
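The generator objective described above combines four terms. A minimal sketch of how such a combination is typically assembled, assuming a symmetry loss defined as the L1 distance between a face and its horizontal mirror (the paper's exact formulation and weights are not given in the abstract, so the weight values here are hypothetical):

```python
import numpy as np

def symmetry_loss(img):
    """Penalize asymmetry of a frontalized face: mean L1 distance between
    the image and its horizontal mirror. One common formulation; the
    paper's exact definition may differ."""
    return np.mean(np.abs(img - img[:, ::-1]))

def total_generator_loss(adv, patch, identity, img,
                         w_adv=1.0, w_sym=0.3, w_patch=1.0, w_id=0.1):
    """Weighted sum of the four generator terms named in the abstract.
    adv, patch, identity are precomputed scalar losses; the weights are
    hypothetical placeholders, not values from the paper."""
    return (w_adv * adv + w_sym * symmetry_loss(img)
            + w_patch * patch + w_id * identity)
```

A perfectly symmetric output contributes zero symmetry loss, so that term only activates when the frontalized face drifts away from bilateral symmetry.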
In face recognition, the acquired face data are often seriously distorted: many collected face images are blurred or have missing regions. Traditional image inpainting was structure-based, whereas currently popular methods build on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate planar positions. Compared with previous methods, our approach detects the edge information of the missing region, and the edge-aware fuzzy inpainting achieves a better visual match. As a result, face recognition performance is dramatically boosted.
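The edge-detection step that drives the inpainting above can be illustrated with a standard Sobel operator and an edge-weighted reconstruction loss over the missing region. This is a sketch under our own assumptions (Sobel as the edge detector, a simple up-weighting scheme), not the paper's actual pipeline:

```python
import numpy as np

def sobel_edges(img):
    """Edge magnitude via 3x3 Sobel filters; an illustrative stand-in for
    the paper's edge-detection step."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_weighted_loss(pred, target, mask, lam=2.0):
    """L1 reconstruction over the missing region (mask == 1), with pixels
    on ground-truth edges up-weighted by (1 + lam). The weighting scheme
    and lam are hypothetical, not taken from the paper."""
    w = 1.0 + lam * (sobel_edges(target) > 0)
    return np.sum(w * mask * np.abs(pred - target)) / max(mask.sum(), 1)
```

Up-weighting edge pixels pushes the generator to reproduce contours inside the hole, which is the visual-match effect the abstract refers to.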