This paper evaluates the performance of face and speaker verification techniques in the context of a mobile environment. The mobile environment was chosen because it provides a realistic and challenging test-bed for biometric person verification techniques: the audio environment is quite noisy, and there is limited control over the illumination conditions and the pose of the subject in the video. To conduct this evaluation, part of a database captured during the "Mobile Biometry" (MOBIO) European Project was used. In total, nine participants in the evaluation submitted face verification systems and five submitted speaker verification systems. The results show that the best-performing face and speaker verification systems obtained comparable levels of performance: 10.9% and 10.6% HTER, respectively.
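The HTER figures quoted above combine false acceptances and false rejections. A minimal sketch of how HTER is conventionally computed from verification scores (the score values and threshold below are hypothetical, not from the MOBIO evaluation):

```python
def hter(genuine_scores, impostor_scores, threshold):
    """Half Total Error Rate: HTER = (FAR + FRR) / 2 at a given threshold."""
    # False Rejection Rate: genuine trials scoring below the threshold
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False Acceptance Rate: impostor trials scoring at or above the threshold
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return (far + frr) / 2

genuine = [0.9, 0.8, 0.4, 0.95]   # hypothetical match scores
impostor = [0.1, 0.3, 0.6, 0.2]   # hypothetical non-match scores
print(hter(genuine, impostor, 0.5))  # FRR = 1/4, FAR = 1/4 -> 0.25
```

In evaluations such as MOBIO, the threshold is typically fixed on a development set and the HTER is then reported on a held-out test set.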
Hand tracking is a fundamental task in a gesture recognition system. Most previous works tracked the hand position on color images and relied heavily on skin color information. However, color information is very vulnerable to lighting variations, and skin color varies across different human races. Furthermore, skin color detection cannot effectively discriminate faces or other skin-colored objects from hands. In this paper, we propose a hand tracking algorithm that uses depth images only, together with a hand click detection method that initializes the hand tracking automatically. We show that depth images suffice for, and are advantageous to, real-time hand tracking. A region growing technique is applied to segment the hand region on depth images. Then a mean-shift based algorithm accurately locates the hand center in the segmented hand region. The experimental results show that the proposed tracking algorithm runs at 300+ FPS, and the average error of the tracked 3D hand positions is less than 1 centimeter. The proposed method enables a plethora of potential applications in natural Human-Computer Interaction (HCI), and is well suited to embedded systems in consumer electronics because of its low complexity and low bandwidth requirement.
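The mean-shift localization step can be sketched as follows. This is a minimal illustration of mean-shift on a binary hand mask, not the authors' implementation; the function name, window radius, and toy mask are assumptions for demonstration:

```python
import numpy as np

def mean_shift_center(mask, start, radius=20, iters=50, eps=0.5):
    """Iteratively shift a circular window to the centroid of the mask
    pixels it covers, converging on the densest region (the hand center)."""
    ys, xs = np.nonzero(mask)                      # coordinates of hand pixels
    pts = np.stack([ys, xs], axis=1).astype(float)
    center = np.asarray(start, dtype=float)
    for _ in range(iters):
        # Keep only mask pixels inside the current circular window
        dists = np.linalg.norm(pts - center, axis=1)
        inside = pts[dists <= radius]
        if len(inside) == 0:
            break                                  # window fell off the mask
        new_center = inside.mean(axis=0)           # shift to local centroid
        if np.linalg.norm(new_center - center) < eps:
            center = new_center
            break                                  # converged
        center = new_center
    return center

# Usage on a toy mask: a 20x20 block of "hand" pixels centered at (49.5, 49.5)
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
print(mean_shift_center(mask, start=(30, 30)))
```

Because each iteration touches only the pixels inside a small window of the segmented region, this kind of update is cheap, which is consistent with the low-complexity, high-frame-rate behavior reported above.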
We propose a fully automatic system that detects and normalizes faces in images and recognizes their genders. To boost the recognition accuracy, we correct the in-plane and out-of-plane rotations of faces, and align faces based on estimated eye positions. To perform gender recognition, a face is first decomposed into several horizontal and vertical strips. Then, a regression function for each strip estimates the likelihood that the strip belongs to a specific gender. The likelihoods from all strips are concatenated to form a new feature, based on which a gender classifier gives the final decision. The proposed approach achieved an accuracy of 88.1% in recognizing the genders of faces in images collected from the World-Wide Web. For faces in the FERET dataset, our system achieved an accuracy of 98.8%, outperforming all six state-of-the-art algorithms compared in this paper.
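The strip-based feature construction can be sketched as below. The strip counts and the per-strip scoring function are placeholder assumptions, not the authors' trained regressors; the sketch only shows how per-strip likelihoods are concatenated into the new feature vector:

```python
import numpy as np

def strip_features(face, n_h=8, n_v=8, strip_regressors=None):
    """Decompose a face image into horizontal and vertical strips and
    concatenate one gender-likelihood estimate per strip."""
    h_strips = np.array_split(face, n_h, axis=0)   # horizontal strips (rows)
    v_strips = np.array_split(face, n_v, axis=1)   # vertical strips (columns)
    likelihoods = []
    for i, strip in enumerate(h_strips + v_strips):
        if strip_regressors is not None:
            likelihoods.append(strip_regressors[i](strip))
        else:
            # Placeholder score: mean intensity stands in for a trained
            # per-strip regression function
            likelihoods.append(strip.mean())
    # The concatenated likelihoods form the feature fed to the final classifier
    return np.array(likelihoods)

face = np.random.rand(64, 64)                      # stand-in aligned face
print(strip_features(face).shape)                  # (16,): one score per strip
```

In the paper's pipeline, this 16-dimensional (here: n_h + n_v) likelihood vector would then be passed to the final gender classifier.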
In this paper we propose a novel fusion strategy that fuses information from multiple physical traits via a cascading verification process. In the proposed system, users are verified by each individual module sequentially, in the order face, voice, and iris, and are accepted as soon as any one module verifies them, without performing the remaining verifications. By adjusting the threshold of each module, the proposed approach exhibits different trade-offs between security and user convenience. We provide a criterion for selecting thresholds under different requirements, and we also design a user interface that helps users find personalized thresholds intuitively. The proposed approach is validated with experiments on our in-house face-voice-iris database. The experimental results indicate that, besides the flexibility between security and convenience, the proposed system also achieves higher accuracy than its most accurate individual module.
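The cascading decision rule described above can be sketched as follows. This assumes each module returns a similarity score; the module scores and thresholds below are hypothetical, not the paper's tuned values:

```python
def cascade_verify(scores, thresholds, order=("face", "voice", "iris")):
    """Accept as soon as any module's score clears its threshold;
    reject only if every module in the cascade fails."""
    for modality in order:
        if scores[modality] >= thresholds[modality]:
            return True, modality   # accepted; later modules are skipped
    return False, None              # rejected by the full cascade

# Lowering a module's threshold favors convenience (earlier acceptance);
# raising it favors security (more modules must be consulted).
scores = {"face": 0.62, "voice": 0.80, "iris": 0.95}
thresholds = {"face": 0.70, "voice": 0.75, "iris": 0.90}
print(cascade_verify(scores, thresholds))  # (True, 'voice')
```

The security/convenience trade-off mentioned above corresponds directly to the threshold dictionary: strict thresholds push acceptance deeper into the cascade (or to rejection), while lenient ones let most genuine users pass at the first module.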