We introduce the WASABI Affect Simulation Architecture, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions that directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of each secondary emotion's connotative meaning in PAD space, we not only assure their mood-congruent elicitation but also combine them with facial expressions that are concurrently driven by the primary emotions. Results of an empirical study suggest that human players in a Skip-Bo card game scenario judge our virtual human MAX to be significantly older when secondary emotions are simulated in addition to primary ones.
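The continuous-to-discrete mapping described above can be pictured as a proximity test in PAD space: the current affective state is a point whose nearest emotion anchor determines the elicited primary emotion. The following minimal Python sketch illustrates that idea; the anchor coordinates, activation threshold, and function names are illustrative assumptions, not the WASABI implementation.

```python
# Minimal sketch (not the authors' implementation): categorize a point in
# PAD space (pleasure, arousal, dominance) into a discrete primary emotion
# by distance to illustrative anchor coordinates.
from math import dist

# Hypothetical anchor points, each coordinate in [-1, 1]; values chosen
# for illustration only.
PRIMARY_EMOTION_ANCHORS = {
    "happy":        ( 0.8,  0.8,  0.5),
    "angry":        (-0.8,  0.8,  1.0),
    "sad":          (-0.6, -0.4, -1.0),
    "fearful":      (-0.8,  0.8, -1.0),
    "bored":        (-0.2, -0.8, -0.5),
    "concentrated": ( 0.0,  0.0,  0.5),
}

def categorize(pad_point, threshold=0.6):
    """Return the primary emotion whose anchor lies closest to the current
    PAD point, or None if no anchor is within the activation threshold."""
    label, distance = min(
        ((name, dist(pad_point, anchor))
         for name, anchor in PRIMARY_EMOTION_ANCHORS.items()),
        key=lambda pair: pair[1],
    )
    return label if distance <= threshold else None

# Example: a pleasant, moderately aroused, dominant state maps to "happy".
print(categorize((0.7, 0.6, 0.4)))
```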
Humans use very sophisticated forms of bodily emotion expression, combining facial expressions, sound, gestures, and full-body posture. Like others, we want to apply these aspects of human communication to ease the interaction between robots and users. In doing so, we believe it is necessary to consider what abstraction of human social communicative behavior is appropriate for robots. The study reported in this paper is a pilot study that does not aim at simulated emotion, but at an abstracted, robot-specific version of emotion expressions, together with an evaluation of the extent to which users interpret these robot expressions as the intended emotional states. To this end, we present the mobile, mildly humanized robot Daryl, for which we created six motion sequences that combine human-like, animal-like, and robot-specific social cues. The results of a user study (N=29) show that, despite the absence of facial expressions and articulated extremities, subjects' interpretations of Daryl's emotional states were congruent with the abstracted emotion displays. These results demonstrate that abstract displays of emotion combining human-like, animal-like, and robot-specific modalities can indeed be an alternative to complex facial expressions, and they will feed into ongoing work on identifying robot-specific social cues.