Along with social distancing, wearing masks is an effective way to prevent the transmission of COVID-19 in the ongoing pandemic. However, masks occlude a large portion of the face, hindering facial recognition: the recognition rate of existing methods can drop significantly in the presence of masks. In this paper, we propose a method that effectively addresses the lack of facial feature information available when recognizing people wearing masks. The proposed approach uses image super-resolution for preprocessing and a deep bilinear module to improve EfficientNet. It also combines feature enhancement with frequency-domain broadening, fuses the spatial and frequency-domain features of the unoccluded areas of the face, and classifies the fused features. Enriching the features of the unoccluded area improves the accuracy of masked-face recognition. Cross-validation results show that the proposed approach achieves 98% accuracy on the RMFRD dataset, with a higher recognition rate and faster speed than previous methods. In addition, an experimental evaluation in a deployed facial recognition system achieved 99% accuracy, demonstrating the effectiveness and practicability of the proposed method.

INDEX TERMS Face recognition with mask, convolutional neural network, frequency domain widening, bilinear module, RMFRD dataset.
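The abstract above describes fusing spatial features with frequency-domain features of the unoccluded face region. As a minimal, stdlib-only sketch of that general idea (not the authors' implementation; the function names and the tiny 1-D patch are illustrative), one can transform a feature vector to the frequency domain with a naive DFT and concatenate the magnitudes with the raw spatial features:

```python
# Illustrative sketch: spatial + frequency-domain feature fusion.
# A real system would use a 2-D transform over image patches and learned
# features; here a naive 1-D DFT over a flattened patch shows the mechanism.
import cmath

def dft_magnitudes(signal):
    """Naive O(n^2) discrete Fourier transform, returning magnitudes."""
    n = len(signal)
    return [
        abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal)))
        for k in range(n)
    ]

def fuse_features(spatial_patch):
    """Concatenate spatial features with their frequency-domain counterpart."""
    freq = dft_magnitudes(spatial_patch)
    return spatial_patch + freq  # fused vector: length 2 * len(spatial_patch)

patch = [0.1, 0.5, 0.9, 0.5]   # stand-in for unoccluded-region features
fused = fuse_features(patch)
print(len(fused))               # 8: spatial (4) + frequency (4)
```

The fused vector doubles the feature dimensionality, which is one simple way the effective information from the unoccluded area can be increased before classification.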
Image inpainting techniques have improved greatly by relying on structure and texture priors. However, damaged original images or rough predictions cannot provide sufficient texture information or accurate structural priors, leading to a drop in image quality. Moreover, from the perspective of human perception, facial symmetry and facial attribute consistency are important. In this paper, we present an iterative face inpainting system guided by generative facial priors contained in pretrained GANs and by predicted semantic information. Specifically, generative facial priors produced by GAN inversion techniques introduce sufficient textures and features to assist inpainting, while semantic maps provide facial structural information and per-pixel semantic categories for face reconstruction. In particular, we refine images over multiple iterations, updating the semantic map in each one. A Weighted Prior-Guidance Modulation layer (WPGM) is devised to incorporate priors into the network through spatial modulation. We also propose a facial feature self-symmetry loss that constrains the symmetry of faces in feature space. Experiments on the CelebA-HQ and LaPa datasets demonstrate the superiority of our model in facial detail and attribute consistency. Meanwhile, against the backdrop of COVID-19, recognition via inpainting is worth exploring as a way to handle the recognition challenges posed by mask occlusion. Relevant experiments show that our inpainting model does help recognition tasks to a certain degree, yielding higher accuracy.
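The second abstract mentions incorporating priors "through spatial modulation". A hedged, stdlib-only sketch of that general mechanism (per-pixel scale and shift derived from a guidance map modulating a feature map, as in SPADE-style layers; all names and values here are illustrative, not the paper's actual WPGM code):

```python
# Illustrative sketch: spatial modulation of a feature map.
# In a real network, gamma and beta are produced by convolutions over the
# semantic map / prior; here they are given directly as small 2-D grids.

def spatial_modulate(features, gamma, beta):
    """Apply out[i][j] = features[i][j] * (1 + gamma[i][j]) + beta[i][j]."""
    return [
        [f * (1.0 + g) + b
         for f, g, b in zip(frow, grow, brow)]
        for frow, grow, brow in zip(features, gamma, beta)
    ]

feat  = [[1.0, 2.0], [3.0, 4.0]]
gamma = [[0.0, 1.0], [0.0, 0.0]]   # would be predicted from the prior/semantic map
beta  = [[0.5, 0.0], [0.0, -1.0]]
print(spatial_modulate(feat, gamma, beta))  # [[1.5, 4.0], [3.0, 3.0]]
```

Because gamma and beta vary per pixel, different semantic regions of the face (eyes, skin, hair) can receive different prior-driven adjustments, which is the point of spatially adaptive modulation over a single global scale and shift.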
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.