Abstract. We present EmoVoice, a framework for creating emotional speech corpora and classifiers and for offline as well as real-time online speech emotion recognition. The framework is intended to be used by non-experts and therefore comes with an interface for creating a personal or application-specific emotion recogniser. Furthermore, we describe several applications and prototypes that already use our framework to track emotional user states online from voice information.
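As a rough, hedged illustration of the kind of pipeline such a framework wraps (the feature set, classifier and file names below are assumptions chosen for illustration, not EmoVoice's actual configuration), a personal speech emotion recogniser can be sketched as per-utterance acoustic feature extraction followed by a standard classifier:

```python
# Minimal sketch of a speech emotion classifier: per-utterance acoustic
# statistics fed to an SVM. Feature set and classifier choice are
# illustrative assumptions, not EmoVoice's actual configuration.
import numpy as np
import librosa
from sklearn.svm import SVC

def utterance_features(wav_path, sr=16000):
    """Summarise an utterance as mean/std of MFCCs (a common acoustic feature)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_recogniser(wav_paths, labels):
    """Fit a classifier on a small, personally recorded emotion corpus."""
    X = np.stack([utterance_features(p) for p in wav_paths])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf

# Usage (paths and labels are placeholders):
# clf = train_recogniser(["angry_01.wav", "neutral_01.wav"], ["anger", "neutral"])
# print(clf.predict([utterance_features("new_take.wav")]))
```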
The ability to display emotions is a key feature of human communication, and it is equally important for robots that are expected to interact with humans in social environments. For expressions based on Body Movement and signals other than facial expressions, such as Sound, no common ground has been established so far. Based on psychological research on the human expression of emotions and the perception of emotional stimuli, we created eight expressional designs for the emotions Anger, Sadness, Fear and Joy, consisting of Body Movements, Sounds and Eye Colors. In a large pre-test we evaluated the recognition rates for the different expressional designs. In our main experiment we separated the expressional designs into their single cues (Body Movement, Sound, Eye Color) and evaluated their expressivity. This detailed view of how our expressional cues are perceived allowed us to evaluate the appropriateness of the stimuli, check our implementations for flaws, and build a basis for systematic revision. Our analysis revealed that almost all Body Movements were appropriate for their target emotion and that some of our Sounds need revision. Eye Colors were identified as an unreliable component for emotional expression.
Abstract. We investigate the usability of an eye-controlled writing interface that matches the nature of human eye gaze, which is always in motion and cannot directly trigger the selection of a button. Such an interface allows the eye to move continuously; it is not necessary to dwell on a specific position to trigger a command. We classify writing into three categories (typing, gesturing, and continuous writing) and explain why continuous writing comes closest to the nature of human eye gaze. We propose Quikwriting, which was originally designed for handhelds, as the text input method that best meets the requirements of eye-gaze-controlled input, and we adapt its design for use with eye gaze. Based on the results of a first study, we formulate guidelines for the design of future Quikwriting-based eye-gaze-controlled applications.
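To make the adaptation concrete, here is a minimal sketch of the Quikwriting principle applied to gaze input: the screen is split into a central rest zone surrounded by outer sectors, and a character is committed by the pair of sectors through which the gaze leaves and re-enters the centre. The sector geometry and the character table below are invented for illustration and do not reflect the layout evaluated in the study.

```python
# Toy sketch of Quikwriting-style gaze text entry: a character is committed
# when the gaze leaves the central rest zone, possibly crosses into a
# neighbouring sector, and returns to the centre. The sector geometry and
# the character table are illustrative assumptions.
import math

N_SECTORS = 8
CENTER = (0.5, 0.5)       # normalised screen coordinates
REST_RADIUS = 0.15        # gaze inside this radius counts as "resting"

# (entry_sector, exit_sector) -> character; a real layout covers the alphabet.
CHAR_MAP = {(0, 0): "e", (0, 1): "t", (1, 1): "a", (1, 0): "n"}

def sector_of(x, y):
    """Return None inside the rest zone, else the index of the outer sector."""
    dx, dy = x - CENTER[0], y - CENTER[1]
    if math.hypot(dx, dy) < REST_RADIUS:
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / N_SECTORS))

def decode(gaze_points):
    """Turn a stream of (x, y) gaze samples into text."""
    text, entry, last = "", None, None
    for x, y in gaze_points:
        s = sector_of(x, y)
        if s is not None:
            if entry is None:
                entry = s          # sector through which the gaze left the centre
            last = s               # sector from which it will return
        elif entry is not None:    # back in the rest zone: commit one character
            text += CHAR_MAP.get((entry, last), "")
            entry = last = None
    return text
```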
Abstract. While objects in our focus of attention ("what we are looking at") and the accompanying affective responses to those objects are part of our daily experience, little research has investigated the relation between attention and positive affective evaluation. The purpose of our research is to process users' emotion and attention in real time, with the goal of designing systems that can recognize a user's affective response to a particular visually presented stimulus in the presence of other stimuli and respond accordingly. In this paper, we introduce the AutoSelect system, which automatically detects a user's preference based on eye movement data and physiological signals in a two-alternative forced choice task. In an exploratory study involving the selection of neckties, the system correctly classified subjects' choices in 81% of cases. In this instance of AutoSelect, the gaze 'cascade effect' played a dominant role, whereas pupil size could not be shown to be a reliable predictor of preference.
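As a simplified illustration of how the gaze cascade effect can drive preference prediction (this is a stand-in sketch, not AutoSelect's classifier; the window length and data format are assumptions), one can predict the item that attracts the larger share of gaze shortly before the decision:

```python
# Simplified gaze-cascade-style preference predictor for a two-alternative
# forced choice: predict the item that dominates gaze in the last window
# before the decision. Window length and data format are assumptions.
def predict_preference(fixations, decision_time, window=1.0):
    """
    fixations: list of (timestamp_seconds, item) with item in {"left", "right"}.
    Returns the item gazed at most during the final `window` seconds.
    """
    counts = {"left": 0, "right": 0}
    for t, item in fixations:
        if decision_time - window <= t <= decision_time:
            counts[item] += 1
    return max(counts, key=counts.get)

# Usage with made-up samples: gaze increasingly settles on the right necktie.
samples = [(0.2, "left"), (0.6, "right"), (1.1, "right"), (1.4, "right")]
print(predict_preference(samples, decision_time=1.5))  # -> "right"
```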
In this paper, we focus on facial displays, eye gaze and head tilts to express social dominance. In particular, we are interested in the interaction of different non-verbal cues. We present a study that systematically varies eye gaze and head tilts for five basic emotions and a neutral state using our own graphics and animation engine. The resulting images are then presented via a web-based interface to a large number of subjects, who are asked to attribute dominance values to the character shown in each image. First, we analyze how dominance ratings are influenced by the conveyed emotional facial expression. Further, we investigate how gaze direction and head pose influence dominance perception depending on the displayed emotional state.