The ability to understand and predict others' behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., we refer to their mental states, such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot; in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely), gaze cuing effects were significantly larger than in conditions in which adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. We therefore conclude that adopting the intentional stance when observing others' behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner.
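As an illustrative sketch only (not taken from the paper), the behavioral gaze cuing effect in a paradigm like the one described above is typically quantified as the reaction time difference between invalidly and validly cued trials, computed separately for each belief condition. The file name and column names (`subject`, `belief`, `validity`, `rt`) below are hypothetical placeholders.

```python
# Minimal sketch: gaze cuing effect per belief condition (assumed data layout).
# Hypothetical trial-level CSV with columns:
#   subject, belief ("intentional" / "non-intentional"),
#   validity ("valid" / "invalid"), rt (reaction time in ms).
import pandas as pd

trials = pd.read_csv("gaze_cuing_trials.csv")  # hypothetical file

# Mean RT per subject, belief condition, and cue validity
mean_rt = (trials
           .groupby(["subject", "belief", "validity"])["rt"]
           .mean()
           .unstack("validity"))

# Gaze cuing effect = invalid RT - valid RT (larger = stronger orienting to gaze)
mean_rt["cuing_effect"] = mean_rt["invalid"] - mean_rt["valid"]

# Average cuing effect (and standard error) in each belief condition
print(mean_rt.groupby("belief")["cuing_effect"].agg(["mean", "sem"]))
```

Under the pattern reported in the abstract, the mean cuing effect would be larger in the condition where the gazer is believed to be an intentional agent.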
Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of attributing a mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from the operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when the gaze was believed to be controlled by a human. Specifically, the P1 was enhanced for validly relative to invalidly cued targets only when participants believed the gaze behavior resulted from a mind rather than from a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience as well as philosophy, in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.
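As a hedged sketch of how a P1 validity effect like the one described above might be quantified (this is an assumed analysis layout, not the authors' pipeline): take the mean amplitude in an early post-target window (roughly 80–130 ms) at occipital channels and compare validly and invalidly cued targets within each belief condition. The sampling rate, epoch timing, window, and channel indices below are illustrative assumptions.

```python
# Minimal sketch: P1 mean-amplitude validity effect from epoched EEG.
# Assumes hypothetical arrays of shape (n_trials, n_channels, n_times),
# baseline-corrected and time-locked to target onset.
import numpy as np

SFREQ = 500.0               # sampling rate in Hz (assumed)
EPOCH_START = -0.2          # epoch onset relative to target, in seconds (assumed)
P1_WINDOW = (0.08, 0.13)    # approximate P1 latency window in seconds
OCCIPITAL_IDX = [58, 59, 60]  # hypothetical indices of O1 / Oz / O2

def p1_mean_amplitude(epochs: np.ndarray) -> np.ndarray:
    """Mean amplitude over the P1 window at occipital channels, one value per trial."""
    start = int((P1_WINDOW[0] - EPOCH_START) * SFREQ)
    stop = int((P1_WINDOW[1] - EPOCH_START) * SFREQ)
    return epochs[:, OCCIPITAL_IDX, start:stop].mean(axis=(1, 2))

# For a given belief condition, the validity effect would then be:
# validity_effect = p1_mean_amplitude(valid_epochs).mean() - \
#                   p1_mean_amplitude(invalid_epochs).mean()
```

On the account in the abstract, this difference would be positive only when participants believed the gaze was controlled by a human.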
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users across a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is through a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy, embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together, and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them such that they are perceived as intentional agents that activate areas of the human brain involved in social-cognitive processing. We first review literature on the social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents for activating these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy, and prosociality, and (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots inspired by neuroscientific principles.