Morphing attacks pose a severe threat to Face Recognition Systems (FRS). Despite the advancements reported in recent works, we note serious open issues that are inadequately addressed, such as independent benchmarking, generalizability, and considerations of age, gender, and ethnicity. Morphing Attack Detection (MAD) algorithms are often prone to generalization challenges because they are database dependent. The existing databases, mostly of a semi-public nature, lack diversity in terms of ethnicity, morphing processes, and post-processing pipelines. Further, they do not reflect a realistic operational scenario for Automated Border Control (ABC) and do not provide a basis to test MAD on unseen data in order to benchmark the robustness of algorithms. In this work, we present a new sequestered dataset for facilitating advancements in MAD, where algorithms can be tested on unseen data in an effort to better generalize. The newly constructed dataset consists of facial images from 150 subjects of various ethnicities, age groups, and both genders. In order to challenge existing MAD algorithms, the morphed images are created from the contributing images with careful subject pre-selection, and further post-processed to remove morphing artifacts. The images are also printed and scanned to remove all digital cues and to simulate a realistic challenge for MAD algorithms. Further, we present a new online evaluation platform to test algorithms on sequestered data. With this platform we can benchmark morph detection performance and study generalization ability. This work also presents a detailed analysis of various subsets of the sequestered data and outlines open challenges for future directions in MAD research.
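The morphed images in such datasets are generated from pairs of contributing subjects. As a minimal illustration of the core blending step only (the paper's actual pipeline is not specified here), the sketch below alpha-blends two pre-aligned face images; real morphing pipelines additionally warp the images via facial landmarks and apply the post-processing described above. All names are illustrative.

```python
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Naive pixel-wise morph of two pre-aligned face images.

    Assumption: both images share the same shape and dtype uint8.
    A production morph would first warp both faces to a common
    landmark geometry before blending.
    """
    blended = alpha * face_a.astype(float) + (1.0 - alpha) * face_b.astype(float)
    return blended.astype(np.uint8)
```

A morph at `alpha=0.5` contributes equally from both subjects, which is the setting most likely to match both contributors in an FRS.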
Data privacy is crucial when dealing with biometric data. Accounting for the latest European data privacy regulation and payment service directive, biometric template protection is essential for any commercial application. By ensuring unlinkability across biometric service operators, irreversibility of leaked encrypted templates, and renewability of, e.g., voice models following the i-vector paradigm, biometric voice-based systems are prepared for the latest EU data privacy legislation. Employing Paillier cryptosystems, Euclidean and cosine comparators are known to meet data privacy demands without loss of discrimination or calibration performance. Bridging gaps from template protection to speaker recognition, two architectures are proposed for the two-covariance comparator, which serves as a generative model in this study. The first architecture preserves the privacy of biometric data capture subjects. In the second architecture, the model parameters of the comparator are encrypted as well, such that biometric service providers can supply the same comparison modules, employing different key pairs, to multiple biometric service operators. An experimental proof of concept and a complexity analysis are carried out on data from the 2013–2014 NIST i-vector machine learning challenge.
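The property that makes Paillier encryption suitable for such comparators is its additive homomorphism: multiplying ciphertexts adds plaintexts, and raising a ciphertext to a constant multiplies the plaintext by that constant, so a server can evaluate a linear score (e.g., an inner product between an encrypted template and plaintext model weights) without ever decrypting the template. The toy sketch below, with deliberately tiny fixed primes and no security whatsoever, illustrates only this homomorphic inner-product idea; it is not the paper's two-covariance comparator, whose construction is more involved.

```python
import math
import random

def _lcm(a: int, b: int) -> int:
    return a * b // math.gcd(a, b)

def keygen(p: int = 3557, q: int = 2579):
    """Toy Paillier keypair from small fixed primes (NOT secure)."""
    n = p * q
    g = n + 1                      # standard choice simplifying decryption
    lam = _lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m: int) -> int:
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c: int) -> int:
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

def encrypted_inner_product(pk, enc_template, weights) -> int:
    """Server-side score on an encrypted template and plaintext weights."""
    n, _ = pk
    n2 = n * n
    c = 1
    for ci, wi in zip(enc_template, weights):
        c = (c * pow(ci, wi, n2)) % n2   # adds wi * xi in the plaintext domain
    return c
```

The server holding `weights` learns neither the template entries nor the score; only the key holder can decrypt the result.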
Nowadays, many facial images are acquired using smartphones. To ensure the best outcome, users frequently retouch these images before sharing them, e.g., via social media. Modifications resulting from the retouching algorithms used can be a challenge for face recognition technologies. Towards deploying robust face recognition, as well as enforcing anti-photoshop legislation, a reliable detection of retouched face images is needed. In this work, the effects of facial retouching on face recognition are investigated. A qualitative assessment of 32 beautification apps is conducted. Based on this assessment, five apps are chosen and used to create a database of 800 beautified face images. Biometric performance is measured before and after retouching using a commercial face recognition system. Subsequently, a retouching detection system based on the analysis of photo response non-uniformity (PRNU) is presented. Specifically, scores obtained from analysing spatial and spectral features extracted from PRNU patterns across image cells are fused. In a scenario in which unaltered bona fide images are compressed with JPEG to the average size of the retouched images, the proposed PRNU-based detection scheme is shown to robustly distinguish between bona fide and retouched images, achieving an average detection equal error rate of 13.7% across all retouching algorithms. Fig. 1 Application of a beautification app: (a) original image, (b) retouched image, and (c) main differences between (a) and (b).
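The detection pipeline described above (per-cell spatial and spectral features from the PRNU noise residual, fused at score level) can be sketched roughly as follows. This is a simplified stand-in, not the paper's implementation: a crude box-filter denoiser replaces the wavelet-based denoiser typically used for PRNU extraction, and the features and fusion weights are illustrative placeholders.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Noise residual via a crude box-filter denoiser (assumption:
    stands in for the wavelet denoiser used in PRNU extraction)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    denoised = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            denoised += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    denoised /= k * k
    return img - denoised

def cell_features(residual: np.ndarray, cells: int = 4):
    """Spatial and spectral features per image cell."""
    h, w = residual.shape
    ch, cw = h // cells, w // cells
    spatial, spectral = [], []
    for i in range(cells):
        for j in range(cells):
            block = residual[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            spatial.append(block.var())              # spatial energy of residual
            spectral.append(np.abs(np.fft.fft2(block)).mean())  # spectral energy
    return np.array(spatial), np.array(spectral)

def retouching_score(img: np.ndarray, w: float = 0.5) -> float:
    """Score-level fusion of the two feature streams (weight is illustrative)."""
    spatial, spectral = cell_features(noise_residual(img))
    return w * spatial.mean() + (1.0 - w) * spectral.mean()
```

In the paper's setting, such a score would be thresholded to decide bona fide vs. retouched, with the threshold chosen on a training set to balance the two error rates.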