The ability to display emotions is a key feature of human communication, and it is equally important for robots that are expected to interact with humans in social environments. For expressions that rely on body movement and signals other than facial expressions, such as sound, no common ground has been established so far. Based on psychological research on the human expression of emotions and the perception of emotional stimuli, we created eight expressional designs for the emotions Anger, Sadness, Fear, and Joy, consisting of Body Movements, Sounds, and Eye Colors. In a large pre-test we evaluated the recognition ratios of the different expressional designs. In our main experiment we separated the expressional designs into their individual cues (Body Movement, Sound, Eye Color) and evaluated their expressivity. This detailed view of how our expressional cues are perceived allowed us to evaluate the appropriateness of the stimuli, check our implementations for flaws, and build a basis for systematic revision. Our analysis revealed that almost all Body Movements were appropriate for their target emotion, that some of our Sounds need revision, and that Eye Colors are an unreliable component for emotional expression.
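As a side note on the evaluation measure: a recognition ratio of this kind is simply the share of raters whose perceived emotion matches the cue's target emotion. The sketch below illustrates that computation under assumed data structures; the function name, cue identifiers, and example responses are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code): per-cue recognition ratios from
# forced-choice responses of the form (cue_id, target_emotion, perceived_emotion).
from collections import Counter

def recognition_ratios(responses):
    """Return, for each cue, the fraction of responses matching the target emotion."""
    hits, totals = Counter(), Counter()
    for cue_id, target, perceived in responses:
        totals[cue_id] += 1
        if perceived == target:
            hits[cue_id] += 1
    return {cue: hits[cue] / totals[cue] for cue in totals}

# Hypothetical example: a Sound cue for Anger recognized by 2 of 3 raters.
data = [("sound_anger", "Anger", "Anger"),
        ("sound_anger", "Anger", "Fear"),
        ("sound_anger", "Anger", "Anger")]
print(recognition_ratios(data))  # {'sound_anger': 0.666...}
```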
In an experiment, we tested whether the gender typicality of a human-robot interaction (HRI) task would affect the users' performance during HRI and the users' evaluation, acceptance and anthropomorphism of the robot. N = 73 participants (38 females and 35 males) performed either a stereotypically male or a stereotypically female task while being instructed by either a 'male' or a 'female' robot. Results revealed that gender typicality of the task significantly affected our dependent measures: More errors occurred when participants collaborated with the robot in the context of a stereotypically female work domain. Moreover, when participants performed a typically female task with the robot they were less willing to accept help from the robot in a future task and they anthropomorphized the robot to a lower extent. These effects were independent of robot and participant gender. Our findings demonstrate that the gender typicality of HRI tasks substantially influences HRI as well as humans' perceptions and acceptance of a robot.
This paper presents a study in which users defined intuitive gestures for navigating a humanoid robot. For eleven navigational commands, 385 gestures performed by 35 participants were analyzed. The results of the study yield user-defined gesture sets for both novice users and expert users. In addition, we present a taxonomy of the user-defined gesture sets, agreement scores for the gesture sets, time performances of the gesture motions, and implications for the design of robot control, with a focus on recognition and user interfaces.
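The abstract does not spell out how the agreement scores are computed; gesture-elicitation studies commonly use the agreement score of Wobbrock et al., where proposals for one command are grouped by identity and A = Σ(|P_i|/|P|)². The sketch below assumes that definition; the paper may use a different variant, and the example labels and group sizes are invented for illustration.

```python
# Hedged sketch: agreement score as commonly defined in gesture-elicitation work.
from collections import Counter

def agreement_score(proposals):
    """proposals: list of gesture labels proposed for one command (referent)."""
    n = len(proposals)
    groups = Counter(proposals)  # groups of identical proposals
    return sum((size / n) ** 2 for size in groups.values())

# Hypothetical example: for one command, 35 participants propose gestures that
# fall into three identical groups of sizes 20, 10, and 5.
props = ["point_left"] * 20 + ["wave_left"] * 10 + ["lean_left"] * 5
print(round(agreement_score(props), 3))  # -> 0.429
```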
In social robotics, humanoid robots are intended to behave in a human-like manner and to serve as natural interaction partners for human users. Several aspects of human behavior, such as speech, gestures, and eye gaze, as well as the personal and social background of the user, therefore need to be considered. In this paper, we investigate interpersonal distance as a behavioral aspect that varies with the cultural background of the user. We present two studies that explore whether users from different cultures (Arabs and Germans) expect robots to behave similarly to members of their own culture. The results of the first study reveal that Arabs and Germans have different expectations about the interpersonal distance between themselves and robots in a static setting. In the second study, we use these results to investigate users' reactions to robots that maintain the observed interpersonal distances themselves. Although the data from this dynamic setting are not conclusive, they suggest that users prefer robots that exhibit the behavior previously observed for the users' own cultural background.
Grounding is an important process that underlies all human interaction and is therefore crucial for social robots that are expected to collaborate effectively with humans. Gaze behavior plays versatile roles in establishing, maintaining, and repairing common ground. Integrating all of these roles in a computational dialog model is a complex task, since gaze is generally combined with multiple parallel information modalities and is involved in multiple processes for the generation and recognition of behavior. Going beyond related work, we present a modeling approach that focuses on these multi-modal, parallel, and bi-directional aspects of gaze that need to be considered for grounding, and on their interleaving with dialog and task management. We illustrate and discuss the different roles of gaze, as well as the advantages and drawbacks of our modeling approach, based on a first user study of a technically sophisticated shared-workspace application with a social humanoid robot.
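To make the idea of fusing parallel modalities for grounding concrete, the following is an illustrative sketch only, not the authors' model: a minimal grounding record that accumulates verbal and gaze evidence for a referent in a shared workspace until it counts as grounded. All class names, evidence values, and the threshold are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' model): grounding state that fuses
# parallel verbal and gaze evidence for one referent.
from dataclasses import dataclass

@dataclass
class GroundingState:
    referent: str
    verbal_evidence: float = 0.0  # e.g. confirmations, repetitions
    gaze_evidence: float = 0.0    # e.g. joint attention on the object
    grounded: bool = False

    def update(self, verbal=0.0, gaze=0.0, threshold=1.0):
        self.verbal_evidence += verbal
        self.gaze_evidence += gaze
        # Bi-directional cues (user gaze and robot gaze feedback) both count;
        # the referent counts as grounded once combined evidence passes the threshold.
        self.grounded = (self.verbal_evidence + self.gaze_evidence) >= threshold
        return self.grounded

state = GroundingState("red_block")
state.update(verbal=0.6)   # user: "the red one"
state.update(gaze=0.5)     # mutual gaze on the red block
print(state.grounded)      # True
```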