Good registration (alignment to a reference) is essential for accurate face recognition. We study the effect of the number of landmarks on the mean localization error and on recognition performance. Two landmarking methods are explored and compared for this purpose: (1) the Most Likely-Landmark Locator (MLLL), based on maximizing the likelihood ratio [2], and (2) Viola-Jones detection [14]. Both use the locations of facial features (eyes, nose, mouth, etc.) as landmarks. In addition, a landmark-correction method (BILBO) based on projection into a subspace is introduced. MLLL has been trained to locate 17 landmarks and the Viola-Jones method 5. The mean localization errors and the effects on verification performance have been measured. On the eyes, the Viola-Jones detector is about 1% of the inter-ocular distance more accurate than the MLLL-BILBO combination; on the nose and mouth, MLLL-BILBO is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks results in lower equal-error rates, even when the landmarking is less accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance.
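The subspace-based correction idea can be illustrated with a minimal sketch (this is not the actual BILBO implementation; all function names, shapes, and parameters here are illustrative assumptions): a vector of detected 2D landmark coordinates is projected into a low-dimensional shape subspace learned from training data and then reconstructed, which pulls a grossly mislocated landmark back toward a statistically plausible configuration.

```python
import numpy as np

def fit_shape_subspace(shapes, n_components=3):
    """shapes: (n_samples, 2*n_landmarks) training shape vectors.
    Returns the mean shape and an orthonormal PCA basis."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal directions via SVD of the centered training shapes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def correct_landmarks(shape, mean, basis):
    """Project a measured shape into the subspace and reconstruct it."""
    coeffs = basis @ (shape - mean)
    return mean + basis.T @ coeffs

# Usage: train on synthetic shapes, then correct a shape with one outlier.
rng = np.random.default_rng(0)
true_shape = np.array([0., 0., 1., 0., 0.5, 1., 0.5, 1.5])  # 4 landmarks
train = true_shape + 0.05 * rng.standard_normal((200, 8))
mean, basis = fit_shape_subspace(train, n_components=3)

measured = true_shape.copy()
measured[2] += 2.0  # gross localization error on one x-coordinate
corrected = correct_landmarks(measured, mean, basis)
```

Because the gross error is unlikely to lie in the low-dimensional subspace spanned by plausible shape variation, the reconstruction suppresses most of it while leaving well-located landmarks nearly unchanged.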
Abstract - In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm alone, but also consider face registration. In our system, registration is done by finding landmarks in a face image and subsequently aligning the image based on these landmarks. We performed experiments in which the resolution was varied, and we examine its effect on the recognition part, the registration part, and the system as a whole. This research also confirms that accurate registration is of vital importance to the performance of the face recognition algorithm. The results of our face recognition system are optimal on face images with a resolution of 32 × 32 pixels.
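A resolution sweep of the kind described above can be emulated by block-average downsampling of the face images; a minimal sketch (illustrative only, not the authors' pipeline):

```python
import numpy as np

def downsample(img, factor):
    """Block-average downsampling to simulate a lower image resolution."""
    H, W = img.shape
    H2, W2 = H - H % factor, W - W % factor  # crop to a multiple of factor
    blocks = img[:H2, :W2].reshape(H2 // factor, factor, W2 // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 64 x 64 dummy "face" reduced to 32 x 32, the resolution the abstract
# reports as optimal for the recognition stage.
face = np.arange(64 * 64, dtype=float).reshape(64, 64)
low = downsample(face, 2)
```

In an experiment one would run registration and verification at each resolution in the sweep and record the resulting error rates.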
Abstract - A probabilistic, maximum a posteriori approach to finding landmarks in a face image is proposed, which provides a theoretical framework for template-based landmarkers. One such landmarker, based on a likelihood ratio detector, is discussed in detail. Special attention is paid to training and implementation issues, in order to minimize storage and processing requirements. In particular, a fast approximate singular value decomposition method is proposed to speed up training, and an implementation of the landmarker in the Fourier domain is presented that speeds up the search process. A subspace method for outlier correction and an iterative implementation of the landmarker are both shown to improve its accuracy. The impact of carefully training the many parameters of the method is illustrated. The method is extensively tested and compared with alternatives.
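The benefit of a Fourier-domain formulation can be seen in a simple sketch (an assumption-laden illustration, not the paper's detector): evaluating a template score at every pixel is a sliding-window correlation, which the FFT computes in one pass instead of an explicit scan over all positions.

```python
import numpy as np

def fft_correlate(image, template):
    """Circular cross-correlation of a template with an image via the FFT."""
    H, W = image.shape
    # Zero-mean the template so flat image regions do not dominate the score.
    t = template - template.mean()
    F_img = np.fft.rfft2(image)
    F_tpl = np.fft.rfft2(t, s=(H, W))  # zero-pad template to image size
    # Correlation theorem: multiply by the conjugate template spectrum.
    return np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))

# Plant a known 8 x 8 patch in a noise image and recover its position
# from the correlation peak.
rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
tpl = img[20:28, 30:38].copy()  # template taken from position (20, 30)
score = fft_correlate(img, tpl)
peak = np.unravel_index(np.argmax(score), score.shape)
```

A full likelihood-ratio detector scores object and background models rather than raw correlation, but the same Fourier-domain trick applies to its linear filtering steps.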
Landmarking can be formalised as calculating the maximum a posteriori (MAP) probability of a set of landmarks given an image (texture) containing a face. In this paper a likelihood-ratio based landmarking method is extended to a MAP-based landmarking method. The approach is validated by means of experiments. The MAP approach turns out to be advantageous, particularly for low-quality images, in which case the landmarking accuracy improves significantly.
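The core of the MAP extension can be sketched in one dimension with entirely synthetic numbers (this toy is my own illustration, not the paper's model): the log-posterior is the log-likelihood-ratio score plus a log-prior over landmark position, so a spurious detector response far from any plausible location is vetoed by the prior, which matters most when image quality makes the likelihood unreliable.

```python
import numpy as np

positions = np.arange(100)
true_pos = 40

# Synthetic log-likelihood-ratio scores: a correct response at the true
# position and a stronger spurious response at position 90 (e.g. clutter).
log_lr = np.zeros(100)
log_lr[true_pos] = 2.0
log_lr[90] = 4.0

# Gaussian log-prior centred where this landmark usually sits.
prior_mean, prior_std = 42.0, 8.0
log_prior = -0.5 * ((positions - prior_mean) / prior_std) ** 2

ml_estimate = int(np.argmax(log_lr))               # likelihood only: fooled
map_estimate = int(np.argmax(log_lr + log_prior))  # MAP: prior vetoes clutter
```

The likelihood-only estimate locks onto the spurious peak, while the MAP estimate recovers the true position because the prior penalty at position 90 outweighs its higher detector score.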