Humans interpret and predict others' behaviors by ascribing intentions or beliefs, or, in other words, by adopting the intentional stance. As artificial agents increasingly populate our daily environments, the question arises whether (and under which conditions) humans would apply this "human model" to understand the behaviors of these new social agents. In a series of three experiments, we therefore tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial tendency to adopt the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after interacting with a more socially available, human-like robot, whereas no such modulation emerged after interaction with a mechanistic robot. We conclude that short experiences with humanoid robots that presumably induce a "like-me" impression and social bonding increase the likelihood of adopting the intentional stance.
The present study highlights the benefits of using well-controlled experimental designs, grounded in experimental psychology research and objective neuroscientific methods, for generating progress in human-robot interaction (HRI) research. More specifically, we aimed at implementing a well-studied paradigm of attentional cueing through gaze (so-called "joint attention" or "gaze cueing") in an HRI protocol involving the iCub robot. Consistent with documented results in gaze-cueing research, we found faster response times and enhanced event-related potentials of the EEG signal for discrimination of cued, relative to uncued, targets. These results are informative for the robotics community by showing that a humanoid robot with mechanistic eyes and human-like facial characteristics is in fact capable of engaging a human in joint attention to a similar extent as another human would. More generally, we propose that the methodology of combining neuroscience methods with an HRI protocol contributes to understanding mechanisms of human social cognition in interactions with robots and to improving robot design, thanks to systematic and well-controlled experimentation tapping into specific cognitive mechanisms of the human, such as joint attention.
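To make the logic of the gaze-cueing analysis concrete, the sketch below shows how the behavioral cueing effect is typically quantified: response times for cued targets (targets appearing where the robot gazed) are compared against response times for uncued targets, with faster cued responses indicating successful joint attention. This is a minimal illustration with hypothetical trial data and illustrative variable names, not the study's actual analysis pipeline.

```python
# Minimal sketch of a gaze-cueing effect computation (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical trial records: (cued, response time in ms).
# 'cued' is True when the target appeared at the gazed-at location.
trials = [
    (True, 412.0), (True, 398.5), (True, 405.2), (True, 420.1),
    (False, 438.7), (False, 445.3), (False, 430.9), (False, 452.4),
]

cued_rts = np.array([rt for cued, rt in trials if cued])
uncued_rts = np.array([rt for cued, rt in trials if not cued])

# The gaze-cueing effect is the mean RT advantage for cued targets.
cueing_effect = uncued_rts.mean() - cued_rts.mean()

# Simple independent-samples t-test as a reliability check
# (a within-subject design would instead use a paired test
# on per-participant mean RTs).
t_stat, p_value = stats.ttest_ind(uncued_rts, cued_rts)

print(f"Gaze-cueing effect: {cueing_effect:.1f} ms "
      f"(t={t_stat:.2f}, p={p_value:.3f})")
```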
Understanding the human cognitive processes involved in interaction with artificial agents is crucial for designing socially capable robots. During social interactions, humans tend to explain and predict others' behavior by adopting the intentional stance, that is, by assuming that mental states drive behavior. However, the question of whether humans would adopt the same strategy with artificial agents remains unanswered. The present study aimed at identifying whether the type of behavior exhibited by the robot has an impact on the attribution of mentalistic explanations of behavior. We employed the InStance Questionnaire (ISQ) pre- and post-observation of two types of behavior (decisive or hesitant). We found that decisive behavior with rare and unexpected "hesitant" behaviors led to more mentalistic attributions than behavior that was primarily hesitant. Findings suggest that higher expectations regarding the robot's capabilities and the characteristics of its behavior might lead to more mentalistic descriptions.
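As an illustration of the pre/post design described above, here is a minimal Python sketch, with entirely hypothetical scores and group labels, of how per-participant ISQ change scores could be compared between the two behavior conditions; it is an assumption-laden example, not the study's actual analysis.

```python
# Minimal sketch of a pre/post ISQ comparison (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical ISQ scores (higher = more mentalistic descriptions),
# one array per behavior condition, one value per participant.
pre = {"decisive": np.array([45.0, 50.0, 48.0, 52.0]),
       "hesitant": np.array([47.0, 49.0, 51.0, 46.0])}
post = {"decisive": np.array([55.0, 58.0, 54.0, 60.0]),
        "hesitant": np.array([46.0, 50.0, 49.0, 45.0])}

# Per-participant change in mentalistic attribution (post minus pre).
change = {group: post[group] - pre[group] for group in pre}

# Between-group comparison of the change scores.
t_stat, p_value = stats.ttest_ind(change["decisive"], change["hesitant"])

for group, deltas in change.items():
    print(f"{group}: mean ISQ change = {deltas.mean():+.1f}")
print(f"Decisive vs hesitant change: t={t_stat:.2f}, p={p_value:.3f}")
```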