Morphing attacks pose a severe threat to Face Recognition Systems (FRS). Despite the advancements reported in recent works, serious open issues remain, such as independent benchmarking, generalizability challenges, and inadequate consideration of age, gender, and ethnicity. Morphing Attack Detection (MAD) algorithms are often prone to generalization challenges because they are database dependent. The existing databases, mostly of a semi-public nature, lack diversity in terms of ethnicity, morphing processes, and post-processing pipelines. Further, they do not reflect a realistic operational scenario for Automated Border Control (ABC) and provide no basis for testing MAD on unseen data in order to benchmark the robustness of algorithms. In this work, we present a new sequestered dataset for facilitating advancements in MAD, on which algorithms can be tested on unseen data in an effort to better generalize. The newly constructed dataset consists of facial images from 150 subjects of various ethnicities, age groups, and both genders. To challenge the existing MAD algorithms, the morphed images are created from the contributing images with careful subject pre-selection and are further post-processed to remove morphing artifacts. The images are also printed and scanned to remove all digital cues and to simulate a realistic challenge for MAD algorithms. Further, we present a new online evaluation platform for testing algorithms on sequestered data, with which we can benchmark morph detection performance and study generalization ability. This work also presents a detailed analysis of various subsets of the sequestered data and outlines open challenges for future directions in MAD research.
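The morphing step described above can be illustrated with a minimal sketch. Real morphing pipelines first warp both contributing faces to shared facial landmarks and then blend them; the sketch below assumes pre-aligned images and shows only the pixel-wise blend (the function name and `alpha` parameter are illustrative, not from the paper):

```python
import numpy as np

def alpha_blend_morph(img_a, img_b, alpha=0.5):
    """Naive pixel-wise morph: a weighted average of two face images.

    Assumes the images are already landmark-aligned and equally sized;
    production morphing tools perform the warping step first and apply
    post-processing to hide blending artifacts.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("images must be aligned and equally sized")
    blended = alpha * img_a.astype(np.float64) + (1.0 - alpha) * img_b.astype(np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)
```

A morph created this way inherits identity information from both subjects, which is exactly what makes it able to match two different enrolled identities in an FRS.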
In this paper we present a new robust approach for 3D face registration to an intrinsic coordinate system of the face. The intrinsic coordinate system is defined by the vertical symmetry plane through the nose, the tip of the nose and the slope of the bridge of the nose. In addition, we propose a 3D face classifier based on the fusion of many dependent region classifiers for overlapping face regions. The region classifiers use PCA-LDA for feature extraction and the likelihood ratio as a matching score. Fusion is realised using straightforward majority voting for the identification scenario. For verification, a voting approach is used as well, and the decision is made by comparing the number of votes to a threshold. Using the proposed registration method combined with a classifier consisting of 60 fused region classifiers, we obtain a 99.0% identification rate on the all vs first identification test of the FRGC v2 data. A verification rate of 94.6% at FAR = 0.1% was obtained for the all vs all verification test on the FRGC v2 data using fusion of 120 region classifiers. The first is the highest reported performance and the second is in the top 5 of best performing systems on these tests. In addition, our approach is much faster than other methods, taking only 2.5 seconds per image for registration and less than 0.1 ms per comparison. Because we apply feature extraction using PCA and LDA, the resulting template size is also very small: 6 kB for 60 region classifiers.
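The two fusion rules described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, and each region classifier is assumed to have already produced its own decision (for identification, the gallery identity it scored highest; for verification, an accept/reject boolean):

```python
from collections import Counter

def identify_by_majority_vote(region_votes):
    """Identification fusion: each region classifier votes for one
    gallery identity; the fused decision is the identity that
    collects the most votes."""
    return Counter(region_votes).most_common(1)[0][0]

def verify_by_vote_count(region_decisions, threshold):
    """Verification fusion: each region classifier accepts or rejects
    the claimed identity; the fused decision accepts if the number
    of accepting regions reaches the threshold."""
    return sum(bool(d) for d in region_decisions) >= threshold
```

Shifting the verification threshold trades off false accepts against false rejects, which is how an operating point such as FAR = 0.1% is selected.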
Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability, and consequently performance, is inconclusive. Using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs’ faces for firm performance in a large sample of faces. We first compare the faces of Fortune 500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different from citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that the faces of CEOs of top-performing firms do not differ from those of other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but not for leader performance.
Blood pool agents (BPAs) for contrast-enhanced (CE) magnetic-resonance angiography (MRA) allow prolonged imaging times for higher contrast and resolution. Imaging is performed during the steady state, when the contrast agent is distributed through the complete vascular system. However, simultaneous venous and arterial enhancement in this steady state hampers interpretation. In order to improve visualization of the arteries and veins from steady-state BPA data, a semiautomated method for artery-vein separation is presented. In this method, the central arterial axis and central venous axis are used as initializations for two surfaces that simultaneously evolve in order to capture the arterial and venous parts of the vasculature using the level-set framework. Since arteries and veins can be in close proximity to each other, leakage from the evolving arterial (venous) surface into the venous (arterial) part of the vasculature is inevitable. In these situations, voxels are labeled arterial or venous based on the arrival time of the respective surface. The evolution is steered by external forces related to feature images derived from the image data and by internal forces related to the geometry of the level sets. In this paper, the robustness and accuracy of three external forces (based on image intensity, image gradient, and vessel-enhancement filtering) and combinations of them are investigated and tested on seven patient datasets. To this end, results with the level-set-based segmentation are compared to reference-standard manually obtained segmentations. Best results are achieved by applying a combination of intensity- and gradient-based forces and a smoothness constraint based on the curvature of the surface.
By applying this combination to the seven datasets, it is shown that, with minimal user interaction, artery-vein separation for improved arterial and venous visualization in BPA CE-MRA can be achieved.

Index Terms—Artery-vein separation (AV separation), blood pool agent (BPA), contrast-enhanced magnetic-resonance angiography (CE-MRA), level set, separation.