It is generally thought that faces are perceived as indissociable wholes. As a result, many assume that hiding large portions of the face through added noise or masking limits or qualitatively alters natural "expert" face processing by forcing observers to rely on atypical processing mechanisms. We addressed this question by measuring face processing abilities with whole faces and with Bubbles (Gosselin & Schyns, 2001), an extreme masking method thought by some to bias observers toward atypical processing mechanisms by limiting the use of whole-face strategies. We obtained a strong negative correlation between individual face processing ability and the number of bubbles (r = -.79), and this correlation remained strong even after controlling for general visual/cognitive processing ability (r_partial = -.72). In other words, the better someone is at processing faces, the fewer facial parts they need to carry out this task accurately. Thus, contrary to what many researchers assume, face processing mechanisms appear to be quite insensitive to visual impoverishment of the face stimulus.
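For readers unfamiliar with Bubbles, the gist of the masking procedure can be sketched in a few lines: on each trial the face is visible only through randomly placed Gaussian apertures on a mid-grey field, and the number of apertures an observer needs to reach criterion accuracy indexes how much facial information they require. A minimal sketch, assuming a grayscale face array in [0, 1]; the function and parameter names (n_bubbles, sigma) are ours, not from Gosselin & Schyns (2001):

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = np.random.randint(0, h), np.random.randint(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def apply_bubbles(face, n_bubbles=20, sigma=10.0):
    """Reveal the face only where apertures fall; elsewhere, mid-grey (0.5)."""
    mask = bubbles_mask(face.shape, n_bubbles, sigma)
    return 0.5 + mask * (face - 0.5)
```

Fewer bubbles means less facial information per trial, which is how the number of bubbles needed for a fixed performance level can serve as the ability measure described above.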
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify the visual system's utilization of the orientation spectrum in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for all expressions except surprise. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, their link with local facial features remains unexplored; we were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion, which allowed us to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
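Orientation bubbles apply the same random-sampling logic to the Fourier orientation spectrum rather than to image locations: on each trial, only randomly chosen orientation bands pass through the filter. The sketch below is illustrative only; the bubble count and angular width (kappa_deg) are hypothetical parameters, not those of the actual experiments.

```python
import numpy as np

def orientation_filter(shape, n_bubbles, kappa_deg):
    """Random Gaussian 'bubbles' over orientation (period 180 deg)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Angle of each frequency component; note that image structure is
    # perpendicular to its frequency vector (a convention detail).
    theta = np.rad2deg(np.arctan2(fy, fx)) % 180.0
    filt = np.zeros(shape)
    for c in np.random.uniform(0.0, 180.0, n_bubbles):
        d = np.abs(theta - c)
        d = np.minimum(d, 180.0 - d)  # wrap-around angular distance
        filt += np.exp(-d ** 2 / (2 * kappa_deg ** 2))
    return np.clip(filt, 0.0, 1.0)

def apply_orientation_bubbles(img, n_bubbles=4, kappa_deg=10.0):
    """Filter a grayscale image so only the sampled orientations survive."""
    mean = img.mean()
    F = np.fft.fft2(img - mean)
    filt = orientation_filter(img.shape, n_bubbles, kappa_deg)
    return np.real(np.fft.ifft2(F * filt)) + mean
```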
The fact that a mere glance makes it possible to extract a wealth of information about the person being observed is testament to both the salience of the human face and the brain's high efficiency in processing this information. Prior work has revealed that social judgments of faces are determined by facial features that vary along two orthogonal dimensions: trustworthiness and dominance. We conducted two experiments to investigate the visual information underlying trustworthiness and dominance judgments. In Experiment 1, we used the Bubbles technique to identify the facial areas and spatial frequencies that modulate these two judgments. Our results show that the eye and mouth areas in high-to-medium spatial frequency bands were positively correlated with judgments of trustworthiness; the eyebrow region in medium-to-low frequency bands was positively correlated with judgments of dominance; and the lower left jawbone in medium-to-low frequency bands was negatively correlated with judgments of dominance. In Experiment 2, we used the results of Experiment 1 to induce subtle variations in the relative contrast of different facial areas, and showed that it is possible to rig social perception using such a manipulation.
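A manipulation in the spirit of Experiment 2, reweighting the relative contrast of specific facial areas, could in principle look like the following. This is a hypothetical illustration, not the authors' code; region_mask (a smooth 0-1 weight map over, e.g., the eyes) and gain are assumed names.

```python
import numpy as np

def modulate_region_contrast(face, region_mask, gain):
    """Scale contrast (deviation from mean luminance) inside one region.

    Hypothetical sketch: 'face' is grayscale in [0, 1], 'region_mask' is
    a smooth 0-1 weight map, gain > 1 raises contrast, gain < 1 lowers it.
    """
    mean = face.mean()
    boosted = mean + gain * (face - mean)  # contrast-rescaled copy
    out = face * (1.0 - region_mask) + boosted * region_mask
    return np.clip(out, 0.0, 1.0)
```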
Facial expressions of emotion play a key role in social interactions. Although their dynamic and transient nature in everyday life calls for fast processing of the visual information they contain, the majority of studies investigating the visual processes underlying their recognition have focused on static displays. The present study aimed to gain a better understanding of these processes using more ecologically valid dynamic facial expressions. In two experiments, we directly compared spatial frequency (SF) tuning during the recognition of static and dynamic facial expressions. Experiment 1 revealed a shift toward lower SFs for dynamic expressions compared with static ones. Experiment 2 was designed to test whether this change in SF tuning was specific to the presence of emotional information in motion, by comparing the SF tuning profiles for static, dynamic, and shuffled dynamic expressions. Results showed a similar shift toward lower SFs for shuffled expressions, suggesting that the difference between dynamic and static expressions may not be linked to informative motion per se, but to the presence of motion regardless of its nature.
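SF tuning of this kind is commonly measured by randomly sampling the spatial frequency spectrum on each trial, the same Bubbles logic applied over log SF. A minimal sketch, assuming grayscale frames; the bubble count and bandwidth (sigma_log, in octaves) are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def sf_filter(shape, n_bubbles, sigma_log):
    """Random Gaussian 'bubbles' over log2 spatial frequency."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    log_r = np.log2(np.maximum(np.sqrt(fy ** 2 + fx ** 2), 1e-6))
    # Sample centers between roughly 1 cycle/image and the Nyquist limit.
    lo, hi = np.log2(1.0 / max(h, w)), np.log2(0.5)
    filt = np.zeros(shape)
    for c in np.random.uniform(lo, hi, n_bubbles):
        filt += np.exp(-(log_r - c) ** 2 / (2 * sigma_log ** 2))
    return np.clip(filt, 0.0, 1.0)

def apply_sf_bubbles(frame, n_bubbles=5, sigma_log=0.25):
    """Keep only the randomly sampled SF bands of a grayscale frame."""
    mean = frame.mean()
    F = np.fft.fft2(frame - mean)
    filt = sf_filter(frame.shape, n_bubbles, sigma_log)
    return np.real(np.fft.ifft2(F * filt)) + mean
```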