Abstract-A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises data from more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Signature and fingerprint data have also been acquired with both the desktop PC and the mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using the desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors for certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB enable new and challenging research and evaluation of both monomodal and multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
This paper proposes novel ways to deal with pose variations in a 2-D face recognition scenario. Using a training set of sparse face meshes, we built a Point Distribution Model and identified the parameters responsible for controlling the apparent changes in shape due to turning and nodding the head, namely the pose parameters. Based on them, we propose two approaches for pose correction: 1) a method in which the pose parameters of both meshes are set to typical values of frontal faces, and 2) a method in which one mesh adopts the pose parameters of the other. Finally, we obtain pose-corrected meshes and, taking advantage of facial symmetry, synthesize virtual views via Thin-Plate-Spline-based warping. Given that the corrected images are not embedded in a constant reference frame, holistic methods are not suitable for feature extraction. Instead, the virtual faces are fed into a system that uses Gabor filtering for recognition. Unlike other approaches that warp faces onto a mean shape, we show that if only the pose parameters are modified, client-specific information remains in the warped image and discrimination between subjects is more reliable. Statistical analysis of the authentication results obtained on the XM2VTS database confirms this hypothesis. The CMU PIE database is also used to assess the performance of the proposed methods in an identification scenario with large pose variations, achieving state-of-the-art results and outperforming both research and commercial techniques.

Index Terms-CMU PIE database, facial symmetry, Gabor jets, point distribution models, pose-invariant face recognition, thin-plate splines, XM2VTS database.
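The thin-plate-spline warping step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes SciPy's `RBFInterpolator` with the `thin_plate_spline` kernel as the TPS solver, and uses a tiny set of hypothetical landmark coordinates in place of the paper's face meshes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical example: a few 2-D source landmarks and their
# pose-corrected target positions (real face meshes are far denser).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 1.0], [1.0, 1.1], [0.55, 0.5]])

# Fit a thin-plate-spline mapping from source to target landmarks;
# applying it to a dense pixel grid would synthesize the virtual view.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# With zero smoothing (the default) the spline interpolates, so the
# control points are reproduced exactly up to numerical precision.
warped = tps(src)
print(np.allclose(warped, dst))
```

In practice the fitted mapping would be evaluated on every pixel coordinate of the output image to pull intensities from the input face, which is the standard way TPS-based image warping is applied.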
Abstract-Social video sites where people share their opinions and feelings are increasing in popularity. The face is known to reveal important aspects of human psychological traits, so understanding how facial expressions relate to personal constructs is a relevant problem in social media. We present a study of the connections between automatically extracted facial expressions of emotion and impressions of Big-Five personality traits in YouTube vlogs (i.e., video blogs). We use the Computer Expression Recognition Toolbox (CERT) system to characterize users of conversational vlogs. From the CERT temporal signals corresponding to instantaneously recognized facial expression categories, we propose and derive four sets of behavioral cues that characterize face statistics and dynamics in a compact way. The cue sets are first used in a correlation analysis to assess the relevance of each facial expression of emotion with respect to Big-Five impressions obtained from crowd-observers watching vlogs, and then as features for automatic personality impression prediction. Using a dataset of 281 vloggers, the study shows that while multiple facial expression cues correlate significantly with several of the Big-Five traits, they are only able to significantly predict Extraversion impressions, with moderate values of R².
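The two analyses described above, per-cue correlation with a trait impression and R²-scored prediction, can be sketched as follows. This is a hedged illustration on synthetic stand-in data (the actual CERT cues and crowd-sourced trait scores are not reproduced here); it assumes SciPy's `pearsonr` and scikit-learn's `Ridge` with cross-validated R² scoring.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 281 vloggers (as in the study), four hypothetical
# facial-expression cues, and one trait impression (e.g., Extraversion)
# weakly driven by the first cue plus noise.
n = 281
cues = rng.normal(size=(n, 4))
extraversion = 0.5 * cues[:, 0] + rng.normal(size=n)

# 1) Correlation analysis: relevance of each cue to the trait impression.
for j in range(cues.shape[1]):
    r, p = pearsonr(cues[:, j], extraversion)
    print(f"cue {j}: r={r:.2f}, p={p:.3f}")

# 2) Automatic prediction: cross-validated R^2 of a linear model.
r2 = cross_val_score(Ridge(alpha=1.0), cues, extraversion,
                     scoring="r2", cv=5).mean()
print(f"cross-validated R^2 = {r2:.2f}")
```

A "moderate" R², as reported for Extraversion in the abstract, means the model explains a modest but non-trivial fraction of the variance in the crowd-sourced impressions.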