In the present study, we investigated whether faces have an advantage over other stimulus categories in retaining attention. In three experiments, subjects were asked to focus on a central go/no-go signal before classifying a concurrently presented peripheral line target. In Experiment 1, the go/no-go signal could be superimposed on photographs of upright famous faces, matching inverted faces, or meaningful objects. Experiments 2 and 3 tested upright and inverted unfamiliar faces, printed names, and another class of meaningful objects in an identical design. A fourth experiment replicated Experiment 1, but with a 1,000-msec stimulus onset asynchrony between the onset of the central face/nonface stimuli and the peripheral targets. In all experiments, the presence of an upright face significantly delayed target response times in comparison with each of the other stimulus categories. These results suggest a general attentional bias that makes it particularly difficult to disengage processing resources from faces.
This study presents the Kent Face Matching Test (KFMT), which comprises 200 same-identity and 20 different-identity pairs of unfamiliar faces. Each face pair consists of a photograph from a student ID card and a high-quality portrait that was taken at least three months later. The test is designed to complement existing resources for face-matching research by providing a more ecologically valid stimulus set that captures the natural variability that can arise in a person's appearance over time. Two experiments demonstrate that the KFMT provides a challenging measure of face matching but correlates with established tests. Experiment 1 compares a short version of this test with the optimized Glasgow Face Matching Test (GFMT). In Experiment 2, a longer version of the KFMT, with infrequent identity mismatches, is correlated with performance on the Cambridge Face Memory Test (CFMT) and the Cambridge Face Perception Test (CFPT). The KFMT is freely available for use in face-matching research.
In everyday life, human faces are encountered in many different views. Despite this, most psychological research has focused on the perception of frontal faces. To address this shortcoming, the current study investigated how different face views are processed by measuring eye movements to frontal, mid-profile, and profile faces during a gender categorization task (Experiment 1) and a free-viewing task (Experiment 2). In both experiments, observers initially fixated the geometric center of a face, independent of face view. This center-of-gravity effect induced a qualitative shift in the features that were sampled across the different face views in the period immediately after stimulus onset. Subsequent fixations focused increasingly on specific facial features: the eye region was targeted predominantly in all face views, followed to a lesser extent by the nose and the mouth. These findings show that initial saccades to faces are driven by general stimulus properties before eye movements are redirected to the specific facial features in which observers take an interest. The findings are illustrated in detail by plotting the distribution of fixations, first fixations, and percentage fixations across time.
In laboratory studies of visual perception, images of natural scenes are routinely presented on a computer screen. Under these conditions, observers look at the center of scenes first, which might reflect an advantageous viewing position for extracting visual information. This study examined an alternative possibility, namely that initial eye movements are drawn toward the center of the screen. Observers searched visual scenes in a person detection task while the scenes were aligned with the screen center or offset horizontally (Experiment 1). Two central viewing effects were observed, reflecting early visual biases toward the scene center and the screen center. The scene effect was modified by person content but was not specific to person detection, whereas the screen bias could not be explained by the low-level salience of a computer display (Experiment 2). These findings support the notion of a central viewing tendency in scene analysis, but they also demonstrate a bias toward the screen center that constitutes a potential artifact in visual perception experiments.
Previous research has demonstrated an interaction between eye gaze and selected facial emotional expressions, whereby the perception of anger and happiness is impaired when the eyes are horizontally averted within a face, but the perception of fear and sadness is enhanced under the same conditions. The current study reexamined these claims across six experiments. In the first three experiments, the categorization of happy and sad expressions (Experiments 1 and 2) and of angry and fearful expressions (Experiment 3) was impaired when eye gaze was averted, in comparison with direct-gaze conditions. Experiment 4 replicated these findings in a rating task that combined all four expressions within the same design. Experiments 5 and 6 then showed that the previous findings, whereby the perception of selected expressions is enhanced under averted gaze, are stimulus- and task-bound. The results are discussed in relation to research on facial expression processing and visual attention.