The performance of fingerprint comparison algorithms depends on the reliability and accuracy of the features extracted from the fingerprints. The accuracy of the feature extraction algorithms is assumed to depend on the quality of the fingerprint images; in particular, low-quality images can be challenging for feature extraction. Image enhancement may allow features to be extracted more accurately, but an extensive, quantitative evaluation of image enhancement methods has been lacking. This study investigates the impact of seven typical image enhancement methods on biometric sample quality and on biometric performance. The interrelation of image quality and biometric performance is investigated on 14 datasets. Biometric sample quality is estimated with the NFIQ 1 and NFIQ 2.0 image quality metrics. Biometric performance is tested using MINDTCT and FingerJetFX for feature extraction and BOZORTH3 for biometric comparison. This work shows that biometric performance can be improved by image enhancement. The significance of the improvement depends on both the quality of the dataset and the feature extraction algorithm, so there is no single best enhancement algorithm. A correlation between changes in comparison scores and changes in image quality can only be found at the level of entire datasets; no significant correlation is found for individual biometric comparisons.
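As a rough illustration of such an evaluation pipeline, the sketch below chains MINDTCT and BOZORTH3 through their command-line interfaces to compare a mated pair before and after enhancement. It assumes the NBIS tools are installed and on PATH; the file names and the enhancement step are placeholders, and the NFIQ quality scoring is omitted, so this is not the paper's actual setup.

```python
# Hedged sketch: score one mated fingerprint pair before and after enhancement,
# assuming the NBIS command-line tools mindtct and bozorth3 are available.
import subprocess
from pathlib import Path

def extract_minutiae(image: Path, out_root: Path) -> Path:
    """Run MINDTCT on a fingerprint image; returns the .xyt minutiae template."""
    subprocess.run(["mindtct", str(image), str(out_root)], check=True)
    return out_root.with_suffix(".xyt")

def comparison_score(probe_xyt: Path, gallery_xyt: Path) -> int:
    """Run BOZORTH3 on two minutiae templates; returns its comparison score."""
    out = subprocess.run(["bozorth3", str(probe_xyt), str(gallery_xyt)],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

# Hypothetical file names: raw vs. enhanced versions of the same mated pair.
raw = comparison_score(extract_minutiae(Path("probe_raw.png"), Path("probe_raw")),
                       extract_minutiae(Path("gallery_raw.png"), Path("gallery_raw")))
enh = comparison_score(extract_minutiae(Path("probe_enh.png"), Path("probe_enh")),
                       extract_minutiae(Path("gallery_enh.png"), Path("gallery_enh")))
print("score change after enhancement:", enh - raw)
```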
Identity documents (or IDs) play an important role in verifying the identity of a person, with wide applications in banking, travel, video-identification services and border control. Replayed or photocopied ID cards can be misused to pass ID checks in unsupervised scenarios if the liveness of the person is not verified. Detecting such presentation attacks when ID cards are presented remotely is therefore a critical step for biometric systems to assure authenticity. In this paper, pixel-wise supervision on a DenseNet is proposed to detect printed and digitally replayed presentation attacks. The authors motivate the use of pixel-wise supervision by its ability to leverage minute cues from artefacts such as moiré patterns and traces left by printers. A baseline benchmark is presented using different handcrafted and deep learning models on a newly constructed in-house database obtained from an operational system, consisting of 886 users with 433 bona fide, 67 print and 366 display attack samples. It is demonstrated that the proposed approach outperforms handcrafted features and deep models, with an Equal Error Rate of 2.22% and Bona fide Presentation Classification Error Rates (BPCER) of 1.83% and 1.67% at Attack Presentation Classification Error Rates (APCER) of 5% and 10%, respectively.
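A minimal sketch of what pixel-wise supervision on a DenseNet backbone could look like in PyTorch is given below. The layer choices, map resolution, labelling scheme and loss formulation are illustrative assumptions, not the exact configuration used by the authors.

```python
# Hedged sketch of pixel-wise supervision on a DenseNet-121 backbone (PyTorch).
import torch
import torch.nn as nn
from torchvision import models

class PixelWisePAD(nn.Module):
    def __init__(self):
        super().__init__()
        # DenseNet-121 feature extractor (N x 1024 x H/32 x W/32 feature maps).
        self.backbone = models.densenet121(weights=None).features
        # 1x1 convolution producing a per-location attack score map.
        self.map_head = nn.Conv2d(1024, 1, kernel_size=1)

    def forward(self, x):
        feat = self.backbone(x)
        pixel_map = torch.sigmoid(self.map_head(feat))  # pixel-wise scores
        score = pixel_map.mean(dim=(1, 2, 3))           # pooled global PAD score
        return pixel_map, score

# Toy training step: attacks are labelled 1 at every map location, bona fides 0.
model = PixelWisePAD()
images = torch.randn(4, 3, 224, 224)
labels = torch.ones(4)                                   # hypothetical attack batch
pixel_map, score = model(images)
target_map = labels.view(-1, 1, 1, 1).expand_as(pixel_map)
loss = nn.functional.binary_cross_entropy(pixel_map, target_map) \
     + nn.functional.binary_cross_entropy(score, labels)
```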
Several biometric databases already contain entries for millions of individuals. With an increasing number of enrolled individuals, the response time of queries grows and can become critical. Fingerprint indexing offers a set of techniques to reduce the number of entries that have to be compared thoroughly. This work surveys research on such techniques, focusing on the fingerprint features used as input. The survey also assesses the quality of the body of research in this field and identifies deficiencies, e.g. the lack of common datasets and metrics for testing.
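The core idea of reducing the number of thorough comparisons can be illustrated with a toy index: a coarse key selects a short candidate list, and only those candidates are compared in full. The quantised feature key below is purely hypothetical and does not correspond to any particular technique covered by the survey.

```python
# Toy illustration of indexing: shortlist candidates instead of comparing
# the probe against every enrolled template (feature key is hypothetical).
from collections import defaultdict

def coarse_key(features, step=10):
    """Quantise a small feature vector into a hashable bucket key."""
    return tuple(int(f // step) for f in features)

index = defaultdict(list)   # bucket key -> list of enrolled identifiers

def enrol(subject_id, features):
    index[coarse_key(features)].append(subject_id)

def candidates(probe_features):
    """Return only the entries that still need a thorough comparison."""
    return index.get(coarse_key(probe_features), [])

enrol("A", [12.3, 55.1, 80.0])
enrol("B", [13.0, 54.2, 81.5])
enrol("C", [90.4, 10.2, 33.3])
print(candidates([12.8, 55.9, 80.4]))   # ['A', 'B'] -- far fewer than all entries
```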