The coordination of attention between individuals is a fundamental part of everyday human social interaction. Previous work has focused on the role of gaze information for guiding responses during joint attention episodes. However, in many contexts, hand gestures such as pointing provide another valuable source of information about the locus of attention. The current study developed a novel virtual reality paradigm to investigate the extent to which initiator gaze information is used by responders to guide joint attention responses in the presence of more visually salient and spatially precise pointing gestures. Dyads were instructed to use pointing gestures to complete a cooperative joint attention task in a virtual environment. Eye and hand tracking enabled real-time interaction and provided objective measures of gaze and pointing behaviours. Initiators displayed gaze behaviours that were spatially congruent with their subsequent pointing gestures, and responders overtly attended to the initiator's gaze during the joint attention episode; however, both initiator and responder behaviours were highly variable across individuals. Critically, when responders did overtly attend to their partner's face, their saccadic reaction times were faster when the initiator's gaze was also congruent with the pointing gesture, and thus predictive of the joint attention location. These results indicate that humans attend to and process gaze information to facilitate joint attention responsivity, even in contexts where gaze information is implicit to the task and joint attention is explicitly cued by more spatially precise and visually salient pointing gestures.
The human brain has evolved specialised mechanisms that enable the rapid detection of threat cues, including emotional facial expressions (e.g., fear and anger). However, contextual cues, such as gaze direction, influence the ability to recognise emotional expressions. For instance, anger paired with direct gaze, and fear paired with averted gaze, are recognised more accurately than the alternate conjunctions of these features. It is argued that this is because gaze direction conveys the relevance and locus of the threat to the observer. Here, we used continuous flash suppression (CFS) to assess whether the modulatory effect of gaze direction on emotional face processing occurs outside of conscious awareness. Previous research using CFS has demonstrated that fearful facial expressions are prioritised by the visual system and gain privileged access to awareness over other expressed emotions. We hypothesised that, if the modulatory effects of gaze on emotional face processing also operate at this level, gaze-emotion conjunctions signalling self-relevant threat would reach awareness faster than those that do not. We report that fearful faces gain privileged access to awareness over angry faces, but that gaze direction does not modulate this effect. Thus, our findings suggest that previously reported effects of gaze direction on emotional face processing are likely to occur once the face is detected, where the self-relevance and locus of the threat can be consciously appraised.