Significance
This study measures face identification accuracy for an international group of professional forensic facial examiners working under circumstances that apply in real-world casework. Examiners and other human face "specialists," including forensically trained facial reviewers and untrained superrecognizers, were more accurate than the control groups on a challenging test of face identification. Therefore, specialists are the best available human solution to the problem of face identification. We present data comparing state-of-the-art face recognition technology with the best human face identifiers. The best machine performed in the range of the best humans: professional facial examiners. However, optimal face identification was achieved only when humans and machines worked in collaboration.
The other-race effect was examined in a series of experiments and simulations that looked at the relationships among observer ratings of typicality, familiarity, attractiveness, memorability, and the performance variables of d' and criterion. Experiment 1 replicated the other-race effect with our Caucasian and Japanese stimuli for both Caucasian and Asian observers. In Experiment 2, we collected ratings from Caucasian observers on the faces used in the recognition task. A Varimax-rotated principal components analysis on the rating and performance data for the Caucasian faces replicated Vokey and Read's (1992) finding that typicality is composed of two orthogonal components, dissociable via their independent relationships to: (1) attractiveness and familiarity ratings and (2) memorability ratings. For Japanese faces, however, we found that typicality was related only to memorability. Where performance measures were concerned, two additional principal components dominated by criterion and by d' emerged for Caucasian faces. For the Japanese faces, however, the performance measures of d' and criterion merged into a single component that represented a second component of typicality, one orthogonal to the memorability-dominated component. A measure of face representation quality extracted from an autoassociative neural network trained with a majority of Caucasian faces and a minority of Japanese faces was incorporated into the principal components analysis. For both Caucasian and Japanese faces, the neural network measure related both to memorability ratings and to human accuracy measures. 
Combined, the human data and simulation results indicate that the memorability component of typicality may be related to small, local, distinctive features, whereas the attractiveness/familiarity component may be more related to the global, shape-based properties of the face.

For many years, it has been suspected that faces of one's own race are recognized more accurately than faces of other races (Feingold, 1914). Indeed, there is abundant empirical evidence for this other-race phenomenon, as two recent meta-analyses of the face recognition literature attest (Bothwell, Brigham, & Malpass, 1989; Shapiro & Penrod, 1986). In addition to the empirical support for this phenomenon, the other-race effect is widely known outside of the laboratory. Deffenbacher and Loftus (1982),
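The performance variables d' (sensitivity) and criterion that figure throughout the abstract above are standard signal-detection measures computed from hit and false-alarm rates. As a minimal illustrative sketch (not the authors' analysis code; the example rates are hypothetical), both can be derived with the inverse normal CDF:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Signal-detection measures from hit and false-alarm rates.

    d' = z(H) - z(F)          (sensitivity)
    c  = -(z(H) + z(F)) / 2   (response criterion; positive = conservative)
    Rates must lie strictly between 0 and 1 (apply a correction otherwise).
    """
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Hypothetical own-race vs. other-race recognition performance
d_own, c_own = dprime_and_criterion(0.85, 0.15)      # higher sensitivity
d_other, c_other = dprime_and_criterion(0.70, 0.30)  # lower sensitivity
```

Note that d' and criterion are mathematically independent here, which is what allows them to load on separate components in analyses like the one described above.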
This report describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looks at recognition from high-resolution still images and three-dimensional (3D) face images, and measures performance for still images taken under controlled and uncontrolled illumination. The ICE 2006 reports iris recognition performance from left and right iris images. The FRVT 2006 results from controlled still images and 3D images document an order-of-magnitude improvement in recognition performance over the FRVT 2002. This order-of-magnitude improvement was one of the goals of the preceding technology development effort, the Face Recognition Grand Challenge (FRGC). The FRVT 2006 and the ICE 2006 compared recognition performance from very-high resolution still face images, 3D face images, and single-iris images. On the FRVT 2006 and the ICE 2006 datasets, recognition performance was comparable for all three biometrics. In an experiment comparing human and algorithm performance, the best-performing face recognition algorithms were more accurate than humans. These and other results are discussed in detail.
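Benchmark evaluations such as the FRVT typically summarize verification accuracy as the true-accept rate (TAR) at a fixed false-accept rate (FAR), computed from genuine and impostor match-score distributions. The sketch below is illustrative only: the score lists are invented and the thresholding rule is one simple convention, not the FRVT protocol:

```python
def tar_at_far(genuine, impostor, far_target=0.001):
    """True-accept rate at a fixed false-accept rate.

    The threshold is set so that the fraction of impostor scores accepted
    does not exceed far_target; TAR is the fraction of genuine scores
    above that threshold. Higher scores indicate a better match.
    """
    imp = sorted(impostor, reverse=True)
    k = int(far_target * len(imp))          # impostor accepts allowed
    threshold = imp[k] if k < len(imp) else imp[-1]
    # Accept strictly above threshold so the realized FAR stays <= target.
    return sum(s > threshold for s in genuine) / len(genuine)

# Hypothetical match scores (10 impostor pairs, 5 genuine pairs)
genuine = [0.92, 0.88, 0.75, 0.28, 0.95]
impostor = [0.10, 0.20, 0.15, 0.65, 0.30, 0.25, 0.12, 0.18, 0.22, 0.05]
rate = tar_at_far(genuine, impostor, far_target=0.10)
```

An "order-of-magnitude improvement" in this framing means roughly a tenfold reduction in the false-reject rate (1 − TAR) at the same fixed FAR.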
Familiar faces are represented with rich visual, semantic, and emotional codes that support nearly effortless perception and recognition of these faces. Unfamiliar faces pose a greater challenge to human perception and memory systems. The established behavioural disparities for familiar and unfamiliar faces undoubtedly stem from differences in the quality and nature of their underlying neural representations. In this review, our goal is to characterize what is known about the neural pathways that respond to familiar and unfamiliar faces using data from functional neuroimaging studies. We divide our presentation by type of familiarity (famous, personal, and visual familiarity) to consider the distinct neural underpinnings of each. We conclude with a description of a recent model of person information proposed by Gobbini and Haxby (2007) and a list of open questions and promising directions for future research.
The perception of face gender was examined in the context of extending "face space" models of human face representations to include the perceptual categories defined by male and female faces. We collected data on the recognizability, gender classifiability (reaction time to classify a face as male/female), attractiveness, and masculinity/femininity of individual male and female faces. Factor analyses applied separately to the data for male and female faces yielded the following results. First, for both male and female faces, the recognizability and gender classifiability of faces were independent, a result inconsistent with the hypothesis that both recognizability and gender classifiability depend on a face's "distance" from the subcategory gender prototype. Instead, caricatured aspects of gender (femininity/masculinity ratings) related to the gender classifiability of the faces. Second, facial attractiveness related inversely to face recognizability for male, but not for female, faces, a result that resolves inconsistencies in previous studies. Third, attractiveness and femininity for female faces were nearly equivalent, but attractiveness and masculinity for male faces were not equivalent. Finally, we applied principal component analysis to the pixel-coded face images with the aim of extracting measures related to the gender classifiability and recognizability of individual faces. We incorporated these model-derived measures into the factor analysis with the human rating and performance measures. This combined analysis indicated that face recognizability is related to the distinctiveness of a face with respect to its gender subcategory prototype. Additionally, the gender classifiability of faces related to at least one caricatured aspect of face gender.

Human faces provide us with a plethora of information that is valuable and necessary for social interaction. When we encounter a face, we can quickly and efficiently decide whether it is one we know.
For faces of persons we know, we can often retrieve semantic and identity information about the person. Additionally, from both familiar and unfamiliar faces we can make judgments about the gender, approximate age, and race of the person. The information we use to accomplish these latter judgments has been referred to by Bruce and Young (1986), in their model of face processing, as "visually derived semantic" information.
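The principal component analysis of pixel-coded face images described in the abstract above (an "eigenfaces"-style decomposition) can be sketched in a few lines. Everything below is a toy stand-in: random arrays replace aligned face photographs, the choice of 20 retained components is arbitrary, and Euclidean distance from a gender prototype is just one plausible way to operationalize distinctiveness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 40 "face images" of 16x16 pixels per gender,
# flattened into vectors (real studies use aligned grayscale photographs).
male = rng.normal(0.0, 1.0, size=(40, 256))
female = rng.normal(0.5, 1.0, size=(40, 256))
faces = np.vstack([male, female])

# PCA via SVD on mean-centred pixel vectors ("eigenfaces")
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, components = np.linalg.svd(centred, full_matrices=False)
coords = centred @ components[:20].T          # project onto first 20 PCs

# One possible distinctiveness measure: distance of each male face
# from the male subcategory prototype in PC space
male_proto = coords[:40].mean(axis=0)
distinctiveness = np.linalg.norm(coords[:40] - male_proto, axis=1)
```

Under the "face space" account above, faces with larger distances from their gender prototype would be predicted to be more recognizable.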