Abstract-One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19,200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples on verification performance is also studied for a widely available minutiae-based fingerprint matching system.
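As an illustration of the kind of measure surveyed above, the sketch below computes a block-wise quality score from the coherence of the local gradient field, in the spirit of orientation-certainty measures. The function name, block size, and background-rejection threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a block-wise fingerprint quality measure based on local
# gradient coherence (akin to the orientation-certainty family of measures
# reviewed in the paper). Block size and thresholds are assumptions.
import numpy as np

def gradient_coherence_quality(img: np.ndarray, block: int = 16) -> float:
    """Return a global quality score in [0, 1] from local gradient coherence."""
    gy, gx = np.gradient(img.astype(float))
    scores = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            bx = gx[r:r + block, c:c + block].ravel()
            by = gy[r:r + block, c:c + block].ravel()
            # Covariance of the gradient vector field within the block.
            a, b, cc = np.dot(bx, bx), np.dot(by, by), np.dot(bx, by)
            trace = a + b
            if trace < 1e-9:               # flat (background) block: skip it
                continue
            # Eigenvalue spread of [[a, cc], [cc, b]] measures ridge coherence.
            root = np.sqrt((a - b) ** 2 + 4.0 * cc ** 2)
            lmax, lmin = (trace + root) / 2.0, (trace - root) / 2.0
            scores.append((lmax - lmin) / (lmax + lmin))
    return float(np.mean(scores)) if scores else 0.0
```

A well-defined ridge pattern yields one dominant gradient direction per block (high coherence), while noisy or smudged regions yield nearly isotropic gradients and a score close to zero.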
Abstract-This work studies the use of deep neural networks (DNNs) to address automatic language identification (LID). Motivated by their recent success in acoustic modelling, we adapt DNNs to the problem of identifying the language of a given spoken utterance from short-term acoustic features. The proposed approach is compared to state-of-the-art i-vector-based acoustic systems on two different datasets: the Google 5M LID corpus and NIST LRE 2009. Results show how LID can largely benefit from using DNNs, especially when a large amount of training data is available. We found relative improvements of up to 70% in C_avg over the baseline system.
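For concreteness, the sketch below shows a frame-level DNN of the general type described: stacked short-term acoustic features pass through fully connected layers with a softmax over languages, and per-frame log-posteriors are averaged to score the whole utterance. The layer sizes, feature dimension, context window, and number of target languages are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a frame-level DNN language classifier of the kind the
# paper compares against i-vector systems. All hyperparameters are assumed.
import torch
import torch.nn as nn

N_LANGUAGES = 8        # assumed number of target languages
FEAT_DIM = 39          # e.g. PLP/MFCC coefficients per frame (assumption)
CONTEXT = 21           # stacked frames fed to the network (assumption)

class FrameLevelLID(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM * CONTEXT, 2560), nn.ReLU(),
            nn.Linear(2560, 2560), nn.ReLU(),
            nn.Linear(2560, 2560), nn.ReLU(),
            nn.Linear(2560, N_LANGUAGES),     # per-frame language logits
        )

    def forward(self, stacked_frames):        # (n_frames, FEAT_DIM * CONTEXT)
        return self.net(stacked_frames)

def utterance_scores(model, stacked_frames):
    """Average per-frame log-posteriors to score the whole utterance."""
    with torch.no_grad():
        log_post = torch.log_softmax(model(stacked_frames), dim=1)
        return log_post.mean(dim=0)            # one score per language

model = FrameLevelLID()
fake_utt = torch.randn(300, FEAT_DIM * CONTEXT)    # 300 frames of dummy input
print(utterance_scores(model, fake_utt).argmax())  # predicted language index
```

Training such a model on frame-labelled data lets the network exploit large corpora directly, which is consistent with the observation above that the gains over i-vector baselines grow with the amount of training data.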
Abstract-A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Signature and fingerprint data have also been acquired with both the desktop PC and the mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using the desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
A novel score-level fusion strategy based on quality measures for multimodal biometric authentication is presented. In the proposed method, the fusion function is adapted each time an authentication claim is made, based on the estimated quality of the biometric signals sensed at that time. Experimental results combining written signatures and quality-labelled fingerprints are reported. The proposed scheme is shown to significantly outperform the fusion approach that does not consider quality signals. In particular, a relative improvement of approximately 20% is obtained on the publicly available MCYT bimodal database.
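The sketch below illustrates the general idea of quality-adaptive score fusion: the weight given to the fingerprint score at each authentication attempt depends on the estimated quality of the captured fingerprint. The specific weighting rule, weight range, and score normalization are illustrative assumptions, not the fusion function actually proposed in the paper.

```python
# Minimal sketch of quality-adaptive score fusion: at each authentication
# attempt, the fingerprint score's contribution is scaled by the estimated
# quality of the captured fingerprint. The rule below is an assumption made
# for illustration only.
def fuse_scores(signature_score: float,
                fingerprint_score: float,
                fingerprint_quality: float) -> float:
    """Scores assumed normalized to [0, 1]; quality in [0, 1] (1 = best)."""
    q = min(max(fingerprint_quality, 0.0), 1.0)
    # A low-quality fingerprint shifts the decision towards the signature score.
    w_fp = 0.5 * q
    w_sig = 1.0 - w_fp
    return w_sig * signature_score + w_fp * fingerprint_score

# Example: a degraded fingerprint capture (quality 0.3) contributes less.
print(fuse_scores(signature_score=0.7, fingerprint_score=0.4,
                  fingerprint_quality=0.3))
```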