Faces provide a complex source of information via invariant (e.g., race, sex, and age) and variant (e.g., emotional expressions) cues. At present, it is not clear whether these different cues are processed separately or whether they interact. Using the Garner paradigm, Experiment 1 confirmed that race, sex, and age cues affected the categorization of faces according to emotional expression, whereas emotional expression had no effect on the categorization of faces by sex, age, or race. Experiment 2 used inverted faces and replicated this pattern of asymmetrical interference for race and age cues, but not for sex cues, for which no interference on emotional expression categorization was observed. Experiment 3 confirmed this finding with a more stringently matched set of facial stimuli. Overall, this study shows that invariant cues interfere with the processing of emotional expressions. It indicates that the processing of invariant cues, but not of emotional expressions, is obligatory and precedes that of emotional expressions.
Previous research has shown that invariant facial features (e.g., sex) and variant facial features (e.g., emotional expressions) interact during face categorization. The nature of this interaction is a matter of dispute, however, and has been reported as either asymmetrical, such that sex cues influence emotion perception but emotional expressions do not affect the perception of sex, or symmetrical, such that sex and emotion cues each reciprocally influence the categorization of the other. In the present research, we identified stimulus set size as the critical factor underlying this disparity. Using faces drawn from different databases, in two separate experiments we replicated the finding of a symmetrical interaction between face sex and emotional expression when larger sets of posers were used. Using a subset of four posers in the same setups, however, did not provide evidence for a symmetrical interaction, which is consistent with prior research. This pattern of results suggests that different strategies may be used to categorize aspects of faces that are encountered repeatedly.
Pain is a fundamental human experience that triggers a range of social and psychological responses. In this study, we present behavioral and fMRI data to examine the effect of the salience of multiple group memberships on reported and neural indices of pain. We found that participants expressed higher levels of pain when more social group memberships were salient. This is consistent with the notion that pain itself motivates people to communicate their pain, and more so when multiple psychological resources are salient. In addition, the fMRI results reveal an interesting twist: when participants increased their pain reporting as salient group memberships increased (from one group to four), there was a corresponding relative reduction in dorsal anterior cingulate cortex and anterior insula activation. These results provide evidence for an adaptive response to pain: the more people make use of the social resources at their disposal when experiencing pain, the less pain-related areas are activated.
Seventy-two university students participated in one of two experiments using the rapid serial visual presentation (RSVP) task. Distractors were four-letter non-words, the first target (T1) was a four-letter word, and the second target (T2) was a two- to four-letter acronym that followed T1 at lags of 2, 3, and 5 items (Experiment 1) or lags of 3 and 5 items (Experiment 2). Familiar acronyms were identified better than unfamiliar acronyms. One pre-exposure of acronyms in a rating task improved RSVP accuracy for familiar acronyms only (Experiments 1 and 2), whereas three pre-exposures produced a benefit for unfamiliar acronyms and no significant additional benefit for familiar acronyms (Experiment 2). Thus a pre-existing unitised memory representation, and relatively recent access to this representation, enhanced target identification. The effects of pre-exposure were more consistent with resource depletion than with attentional filtering accounts of the attentional blink.