Four experiments investigated matching of unfamiliar target faces taken from high-quality video against arrays of photographs. In Experiment 1, targets were present in 50% of arrays. Accuracy was poor and worsened when viewpoint and expression differed between target and array faces. In Experiment 2, targets were present in every array, but performance remained highly error prone. In Experiment 3, short video clips of the targets were shown and could be replayed as often as necessary, but performance was only slightly better than in Experiment 2. Experiment 4 showed that matching was dominated by external face features. The results urge caution in the use of video images to identify people who have committed crimes. Superficial impressions of resemblance or dissimilarity between face images can be highly misleading.
Summary: People are excellent at identifying faces familiar to them, even from very low quality images, but are bad at recognising, or even matching, faces that are unfamiliar. In this review we shall consider some of the factors that affect our ability to match unfamiliar faces. Major differences in orientation (e.g. inversion) or greyscale information (e.g. negation) affect face processing dramatically, and such effects are informative about the nature of the representations derived from unfamiliar faces, suggesting that these are based on relatively low-level image descriptions. Consistent with this, even relatively minor differences in lighting and viewpoint create problems for human face matching, raising potentially important concerns about the use of images taken from security video. The relationships between different parts of the face (its "configuration") are as important to the impression created by an upright face as the local features themselves, suggesting further constraints on the representations derived from faces. The review then considers what computer face recognition systems may contribute to understanding both the theory and the practical problems of face identification. Computer systems can be used as an aid to person identification, but also as an attempt to model human perceptual processes. There are many approaches to computer recognition of faces, including ones based on low-level image analysis of whole face images, which have potential as models of human performance. Some systems show significant correlations with human perceptions of the same faces, for example recognising distinctive faces more easily. In some circumstances, some systems may exceed human abilities with unfamiliar faces. Finally, we look to the future of work in this area, which will incorporate motion and three-dimensional shape information.
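The suggestion that unfamiliar faces are coded in terms of relatively low-level image descriptions can be made concrete with a toy comparison. The sketch below is our own illustration, not taken from the review: it scores two aligned grayscale images (represented as NumPy arrays; the function name and inputs are hypothetical) by normalised pixel correlation, the kind of image-bound measure that drops sharply when lighting or viewpoint changes even though identity does not.

```python
import numpy as np

def normalised_correlation(img_a, img_b):
    """Zero-mean, unit-variance correlation between two aligned grayscale
    images of the same shape: 1.0 means identical up to overall brightness
    and contrast; lower values mean the raw pixel patterns diverge."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-8)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-8)
    return float((a * b).mean())

# Under a measure like this, two photographs of the same unfamiliar person
# taken under different lighting or viewpoints can score lower than
# photographs of two different people taken under matched conditions --
# the fragility the review attributes to image-bound representations.
```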
The genetic disorder Williams syndrome (WS) is associated with a propulsion towards social stimuli and interactions with people. In contrast, the neuro-developmental disorder autism is characterised by social withdrawal and a lack of interest in socially relevant information. Using eye-tracking techniques, we investigate how individuals with these two neuro-developmental disorders, associated with distinct social characteristics, view scenes containing people. The way individuals with these disorders view social stimuli may affect the success of their social interactions and communication. Whilst individuals with autism spend less time than is typical viewing people and faces in static pictures of social interactions, the opposite is apparent for those with WS, who show exaggerated fixation on the eyes. The results suggest that more attention should be paid to the implications of atypical social preferences in WS, in the same way that attention has been drawn to the social deficits associated with autism.
We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis (PCA), we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, for which we had no control over superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.
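As a rough illustration of the averaging-then-PCA pipeline described in this abstract, the following Python fragment is our own sketch: the function names, the use of scikit-learn's PCA, and all parameter choices are assumptions rather than the authors' implementation. It averages aligned grayscale images per identity, fits PCA on those averages, and matches a probe image to the nearest identity in the resulting space.

```python
import numpy as np
from sklearn.decomposition import PCA

def identity_average(images):
    """Average several aligned grayscale images (same shape) of one person
    into a single 'abstract' image of that identity."""
    return np.mean(np.stack(images, axis=0), axis=0)

def build_average_space(gallery, n_components=20):
    """Fit PCA on one average image per identity.

    gallery: dict mapping identity name -> list of aligned image arrays.
    Returns the fitted PCA model, the identity names, and the projected
    gallery coordinates, ready for nearest-neighbour matching."""
    names = list(gallery)
    averages = np.stack([identity_average(gallery[n]).ravel() for n in names])
    pca = PCA(n_components=min(n_components, len(names)))
    coords = pca.fit_transform(averages)
    return pca, names, coords

def match(probe, pca, names, coords):
    """Project a probe image into the average-based space and return the
    nearest gallery identity by Euclidean distance."""
    p = pca.transform(probe.ravel()[None, :])
    return names[int(np.argmin(np.linalg.norm(coords - p, axis=1)))]
```

In a setup like this, adding more images to each identity's average should stabilise the representation, which is the effect the abstract reports: the average, rather than any single instance, carries the identity-specific information.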