In this paper, the impact of facial expressions on HRI is explored. To determine their influence on a human's empathy towards a robot and on perceived subjective performance, an experimental setup is created in which participants engage in a dialog with the robot head EDDIE. The web-based gaming application "Akinator" serves as a backbone for the dialog structure. In this game, the robot tries to guess a person the human has thought of by asking various questions about that person. In our experimental evaluation, the robot reacts in various ways to the human's facial expressions, either ignoring them, mirroring them, or displaying its own facial expression based on a psychological model for social awareness. How this robot behavior influences the human's perception of the interaction is investigated by a questionnaire. Our results support the hypothesis that the robot's behavior during interaction heavily influences the extent of a human's empathy towards the robot and the perceived subjective task performance, with the adaptive modes clearly leading compared to the non-adaptive mode.
Model-based techniques have proven to be successful in interpreting the large amount of information contained in images. Associated fitting algorithms search for the global optimum of an objective function, which should correspond to the best model fit in a given image. Although fitting algorithms have been the subject of intensive research and evaluation, the objective function is usually designed ad hoc, based on implicit and domain-dependent knowledge. In this article, we address the root of the problem by learning more robust objective functions. First, we formulate a set of desirable properties for objective functions and give a concrete example function that has these properties. Then, we propose a novel approach that learns an objective function from training data generated by manual image annotations and this ideal objective function. In this approach, critical decisions such as feature selection are automated, and the remaining manual steps hardly require domain-dependent knowledge. Furthermore, an extensive empirical evaluation demonstrates that the learned objective functions are more robust: they enable fitting algorithms to determine the best model fit more accurately than designed objective functions do.
This paper introduces the Assistive Kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the Assistive Kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that need to be solved to realize the scenarios successfully.