We analyze the use of Speeded Up Robust Features (SURF) as local descriptors for face recognition. The effects of different feature extraction and viewpoint-consistency-constrained matching approaches are analyzed, and a RANSAC-based outlier removal for system combination is proposed. The proposed approach makes it possible to match faces under partial occlusion, and even when they are not perfectly aligned or illuminated, whereas current approaches are sensitive to registration errors and usually rely on a very good initial alignment and illumination of the faces to be recognized. Because interest-point-based feature extraction approaches for face recognition often fail, we propose a grid-based, dense extraction of local features combined with a block-based matching that accounts for different viewpoint constraints. The proposed SURF descriptors are compared to SIFT descriptors. Experimental results on the AR-Face and CMU-PIE databases using manually aligned faces, unaligned faces, and partially occluded faces show that the proposed approach is robust and can outperform current generic approaches.
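As a minimal sketch of the grid-based, dense local feature extraction described above (not the authors' code), the following uses OpenCV keypoints placed on a regular grid; SIFT is used as the descriptor here because SURF requires the opencv-contrib build, and the grid step and patch size are illustrative assumptions.

```python
# Sketch: dense grid-based local descriptor extraction with OpenCV.
# SURF is only available in opencv-contrib builds; SIFT stands in here.
import cv2
import numpy as np

def dense_descriptors(gray, step=8, size=16):
    """Place keypoints on a regular grid and compute local descriptors."""
    h, w = gray.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
                 for y in range(step, h - step, step)
                 for x in range(step, w - step, step)]
    extractor = cv2.SIFT_create()
    keypoints, descriptors = extractor.compute(gray, keypoints)
    return keypoints, descriptors

# Usage: load a (roughly aligned) face crop in grayscale.
face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
kps, descs = dense_descriptors(face)
print(descs.shape)  # (number of grid points, 128) for SIFT
```

Block-based matching would then compare each grid descriptor of the test face only against reference descriptors in a spatially constrained neighborhood, which is what enforces the viewpoint consistency mentioned in the abstract.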
Most current state-of-the-art methods for unconstrained face recognition use deep convolutional neural networks. Recently, it has been proposed to augment the typically used softmax cross-entropy loss with a center loss that minimizes the distance between face images and their class centers. In this work we further extend the center (intra-class) loss with an inter-class loss reminiscent of the popular early face recognition approach Fisherfaces. To this end we add a term that directly optimizes the distances between the class centers appearing in a batch as a function of the input images. We evaluate the new loss on two popular databases for unconstrained face recognition, the Labeled Faces in the Wild and the YouTube Faces databases. In both cases the new loss achieves competitive results.
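A minimal sketch of such a combined loss is shown below (an assumed implementation, not the authors' released code): the intra-class term pulls embeddings toward their class centers, and the added inter-class term pushes apart the centers of the classes present in the current batch; the margin and weighting are illustrative hyperparameters.

```python
# Sketch: center (intra-class) loss plus a batch-wise inter-class term.
import torch
import torch.nn as nn

class CenterInterClassLoss(nn.Module):
    def __init__(self, num_classes, feat_dim, margin=1.0, inter_weight=0.1):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin
        self.inter_weight = inter_weight

    def forward(self, features, labels):
        # Intra-class term: pull each embedding toward its class center.
        batch_centers = self.centers[labels]
        intra = ((features - batch_centers) ** 2).sum(dim=1).mean()

        # Inter-class term: push apart centers of the classes in this batch.
        unique = labels.unique()
        c = self.centers[unique]                 # (k, feat_dim)
        if unique.numel() < 2:
            return intra
        dists = torch.cdist(c, c, p=2)           # pairwise center distances
        mask = ~torch.eye(unique.numel(), dtype=torch.bool, device=c.device)
        inter = torch.clamp(self.margin - dists[mask], min=0).mean()
        return intra + self.inter_weight * inter

# Usage alongside the usual softmax cross-entropy:
# loss = ce(logits, labels) + lambda_c * center_inter_loss(embeddings, labels)
```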
In this paper, we propose a novel algorithm for general 2D image matching, which is known to be an NP-complete optimization problem. With our algorithm, the complexity is handled by sequentially optimizing the image columns from left to right in a two-level dynamic programming procedure. On a local level, a set of hypotheses is computed for each column, while on a global level the best sequence of these hypotheses is selected. The optimization on the local level is guided by a lookahead that gives an estimate of the part of the image that has not yet been optimized. We evaluate the algorithm on the task of pose-invariant face recognition in an automatic setup and show that the suggested method is competitive and achieves very good recognition accuracies on the popular face recognition databases CMU-PIE and CMU-MultiPIE.
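To make the two-level idea concrete, here is a strongly simplified sketch (an assumption, not the paper's algorithm; the lookahead is omitted): each column of the test image gets a set of displacement hypotheses scored locally, and a Viterbi-style dynamic program selects the best sequence of hypotheses under a smoothness penalty between neighboring columns.

```python
# Sketch: column-wise matching with local hypotheses and global DP.
import numpy as np

def column_dp_match(test, ref, max_disp=3, smooth=1.0):
    h, w = test.shape
    disps = np.arange(-max_disp, max_disp + 1)
    n = len(disps)

    # Local level: cost of matching test column x to reference column x + d.
    cost = np.full((w, n), np.inf)
    for x in range(w):
        for j, d in enumerate(disps):
            xr = x + d
            if 0 <= xr < w:
                cost[x, j] = np.sum((test[:, x] - ref[:, xr]) ** 2)

    # Global level: best hypothesis sequence with a smoothness penalty.
    acc = cost[0].copy()
    back = np.zeros((w, n), dtype=int)
    for x in range(1, w):
        # trans[j, i] = cost of coming from previous hypothesis i to current j.
        trans = acc[None, :] + smooth * np.abs(disps[None, :] - disps[:, None])
        back[x] = np.argmin(trans, axis=1)
        acc = cost[x] + np.min(trans, axis=1)

    # Backtrack the best displacement sequence.
    path = [int(np.argmin(acc))]
    for x in range(w - 1, 0, -1):
        path.append(back[x, path[-1]])
    return float(np.min(acc)), disps[np.array(path[::-1])]
```

The returned matching cost can then serve as a dissimilarity score between a probe face and each gallery face.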
The task of fine-grained visual classification (FGVC) deals with classification problems that exhibit small inter-class variance, such as distinguishing between different bird species or car models. State-of-the-art approaches typically tackle this problem by integrating an elaborate attention mechanism or (part-)localization method into a standard convolutional neural network (CNN). In this work, the aim is likewise to enhance the performance of a backbone CNN such as ResNet by including three efficient and lightweight components specifically designed for FGVC. This is achieved by using global k-max pooling, a discriminative embedding layer trained by optimizing class means, and an efficient bounding box estimator that only needs class labels for training. The resulting model achieves new state-of-the-art recognition accuracies on the Stanford Cars and FGVC-Aircraft datasets.
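Of the three components, global k-max pooling is the simplest to illustrate; the sketch below (an assumption, not the released model) averages the k largest activations per channel over all spatial positions, as a drop-in replacement for the global average pooling of a ResNet-style backbone.

```python
# Sketch: global k-max pooling over the spatial positions of a feature map.
import torch

def global_k_max_pool(feature_map, k=4):
    """Average the k largest activations per channel across all spatial positions."""
    b, c, h, w = feature_map.shape
    flat = feature_map.view(b, c, h * w)
    topk, _ = flat.topk(k, dim=2)      # (b, c, k): largest responses per channel
    return topk.mean(dim=2)            # (b, c): pooled descriptor

# Usage: replace the final average pooling of a backbone.
features = torch.randn(8, 2048, 7, 7)  # e.g. ResNet-50 conv5 output
pooled = global_k_max_pool(features, k=4)
print(pooled.shape)                    # torch.Size([8, 2048])
```

With k = 1 this reduces to global max pooling, and with k = h * w it reduces to global average pooling, so k interpolates between the two.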