The world today is being hit by COVID-19. As opposed to fingerprints and ID cards, facial recognition technology can effectively prevent the spread of viruses in public places because it does not require contact with specific sensors. However, people also need to wear masks when entering public places, and masks greatly reduce the accuracy of facial recognition. Accurately recognizing faces while people wear masks is therefore a great challenge. In order to solve the problem of low facial recognition accuracy for mask wearers during the COVID-19 epidemic, we propose a masked-face recognition algorithm based on a large margin cosine loss (MFCosface). Because there is insufficient masked-face data for training, we designed a masked-face image generation algorithm based on the detection of key facial features. The face is detected and aligned through a multi-task cascaded convolutional network; we then detect the key features of the face and select a mask template for coverage according to the positional information of those key features. Finally, we generate the corresponding masked-face image. Through analysis of the masked-face images, we found that triplet loss is not applicable to our datasets, because the results of online triplet selection contain few mask variations, making it difficult for the model to learn the relationship between mask occlusion and feature mapping. We instead use a large margin cosine loss as the loss function for training, which maps all the feature samples into a feature space with a smaller intra-class distance and a larger inter-class distance. In order to make the model pay more attention to the area that is not covered by the mask, we designed an Att-inception module that combines the Inception-ResNet module and the convolutional block attention module; it increases the weight of any unoccluded area in the feature map, thereby enlarging the unoccluded area's contribution to the identification process.
Experiments on several masked-face datasets have proved that our algorithm greatly improves the accuracy of masked-face recognition, and can accurately perform facial recognition with masked subjects.
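The large margin cosine loss at the core of MFCosface can be illustrated with a minimal NumPy sketch of the CosFace-style formulation: embeddings and class weights are L2-normalized so logits become cosine similarities, a margin m is subtracted from the target-class cosine, and the result is scaled by s before softmax cross-entropy. Function and parameter names here are ours for illustration, not taken from the paper's implementation:

```python
import numpy as np

def large_margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """Sketch of a large margin cosine (CosFace-style) loss.
    features: (N, d) embeddings; weights: (C, d) class weight vectors;
    labels: (N,) integer class ids; s: scale; m: cosine margin."""
    # L2-normalize so the dot products below are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                  # (N, C) cosines
    # Subtract the margin m from the target-class cosine only.
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    logits = s * (cos - margin)
    # Numerically stable softmax cross-entropy on the adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Subtracting m from the target cosine forces the learned embedding of each identity to exceed other classes by a fixed cosine gap, which is what shrinks intra-class distance and widens inter-class distance in the feature space.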
This research discusses a deep learning method based on an improved generative adversarial network for segmenting the hippocampus. Different convolutional configurations were proposed to capture the information obtained by a segmentation network. In addition, a generative adversarial network based on Pix2Pix was proposed. The generator was an encoder-decoder structure combining a residual network and an attention mechanism to capture detailed information. The discriminator used a convolutional neural network to distinguish the segmentation results of the generative model from those of the expert. Through the losses continually propagated between the generator and discriminator, the generator converged to an optimal state for hippocampus segmentation. T1-weighted magnetic resonance imaging scans and the corresponding hippocampus labels of 130 healthy subjects from the Alzheimer's Disease Neuroimaging Initiative dataset were used as training and test data; the Dice similarity coefficient, sensitivity, and positive predictive value were used as evaluation indicators. Results showed that the network model can achieve efficient automatic segmentation of the hippocampus and thus has practical relevance for the correct diagnosis of diseases such as Alzheimer's disease.