Attention orienting towards a gazed-at location is fundamental to social attention. Whether gaze cues can interact with emotional expressions other than those signaling environmental threat to modulate gaze cuing, and whether this integration changes over time, remains unclear. In four experiments we demonstrate that, when the perceived motion inherent to dynamic displays is controlled for, gaze cuing is enhanced by both fearful and happy faces compared to neutral faces. This enhancement is seen at stimulus onset asynchronies (SOAs) ranging from 200 to 700 ms. Thus, gaze cuing can be reliably modulated by positive expressions, albeit to a smaller degree than by fearful ones, and this gaze-emotion integration impacts behaviour as early as 200 ms post-cue onset.
Gaze-cuing refers to the spontaneous orienting of attention towards a gazed-at location, characterised by shorter response times to gazed-at than non-gazed-at targets. Previous research suggests that the processing of these gaze cues interacts with the processing of facial expression cues to enhance gaze-cuing. However, whether only negative emotions (which signal potential threat or uncertainty) can enhance gaze-cuing is still debated, and whether this emotional modulation varies as a function of individual differences remains largely unclear. Combining data from seven experiments, we investigated the emotional modulation of gaze-cuing in the general population as a function of participant sex and self-reported subclinical trait anxiety, depression, and autistic traits. We found that (i) emotional enhancement of gaze-cuing can occur for both positive and negative expressions, (ii) the higher the score on the Attention to Detail subscale of the Autism Spectrum Quotient, the smaller the emotional enhancement of gaze-cuing, especially for happy expressions, and (iii) emotional modulation of gaze-cuing does not vary as a function of participant anxiety, depression, or sex, although women display an overall larger gaze-cuing effect than men.
With the widespread adoption of masks, there is a need to understand how facial obstruction affects emotion recognition. We asked 120 participants to identify emotions from faces with and without masks. We also examined whether recognition performance was related to autistic traits and personality. Masks impacted recognition of expressions with diagnostic lower-face features the most and those with diagnostic upper-face features the least. Persons with higher autistic traits were worse at identifying unmasked expressions, while persons with lower extraversion and higher agreeableness were better at recognizing masked expressions. These results show that different facial features play different roles in emotion recognition and suggest that obscuring features affects social communication differently as a function of autistic traits and personality.
The gaze cueing effect is characterized by faster attentional orienting to a gazed-at than a non-gazed-at target. This effect is often enhanced when the gazing face bears an emotional expression, though the enhancement is modulated by a number of factors. Here, we tested whether the type of task performed might be one such modulating factor. Target localization and target discrimination tasks are the two most commonly used gaze cueing tasks, and they arguably differ in the cognitive resources they demand, which could impact how emotional expression and gaze cues are integrated to orient attention. In a within-subjects design, participants performed both target localization and discrimination gaze cueing tasks with neutral, happy, and fearful faces. The gaze cueing effect for neutral faces was greatly reduced in the discrimination task relative to the localization task, and the emotional enhancement of the gaze cueing effect was present only in the localization task, and only when that task was performed first. These results suggest that cognitive resources are needed both for gaze cueing and for the integration of emotional expressions and gaze cues. We propose that a shift toward local processing may be the mechanism by which the discrimination task interferes with the emotional modulation of gaze cueing. The results support the idea that gaze cueing can be greatly modulated by top-down influences and cognitive resources, and thus taps into endogenous attention. Results are discussed within the context of the recently proposed EyeTune model of social attention.
Most face processing research has investigated how we perceive faces presented by themselves, yet we view faces every day within a rich social context. Recent ERP research has demonstrated that context cues, including self-relevance and valence, impact electrocortical and emotional responses to neutral faces. However, the time-course of these effects is still unclear, and it is unknown whether they interact with the face's gaze direction, a cue that inherently contains self-referential information and triggers emotional responses. We primed direct- and averted-gaze neutral faces (gaze manipulation) with contextual sentences that contained positive or negative opinions (valence manipulation) about the participants or someone else (self-relevance manipulation). In each trial, participants rated how positive or negative, and how affectively aroused, the face made them feel. Eye-tracking ensured sentence reading and face fixation while ERPs were recorded to face presentations. Faces put into self-relevant contexts were more arousing than those in other-relevant contexts and elicited ERP differences from 150 to 750 ms post-face, encompassing the EPN and LPP components. Self-relevance interacted with valence at both the behavioural and ERP levels starting 150 ms post-face. Finally, faces put into positive, self-referential contexts elicited different N170 ERP amplitudes depending on gaze direction. Behaviourally, direct gaze elicited more positive valence ratings than averted gaze in positive, self-referential contexts. Thus, self-relevance and valence contextual cues impact the visual perception of neutral faces and interact with gaze direction during the earliest stages of face processing. The results highlight the importance of studying face processing within contexts that mimic the complexities of real-world interactions.