With forty-six Action Units (AUs) as the building blocks of the Facial Action Coding System (FACS), millions of facial configurations can be formed. Most research has focused on a subset of AU combinations to determine the link between facial configurations and emotions. Despite the psychological and computational value of this work, it remains unclear which AU combinations occur most often, and thus which facial configurations are most commonly expressed. We performed a computational analysis of three diverse corpora with human-coded facial action units. The analysis demonstrated that the largest portion of facial behavior consists of the absence of AU activations, yielding only one specific facial configuration: the neutral face. These results are important for cognitive scientists, computer graphics designers, and virtual human developers alike. They suggest that only a relatively small number of AU combinations is initially needed to create natural facial behavior in Embodied Conversational Agents (ECAs).
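The kind of frequency analysis described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the per-frame AU annotations and the specific AU numbers (e.g. AU6 + AU12 for a Duchenne smile) are hypothetical toy data, and each frame is represented simply as the set of active AUs, with the empty set standing for the neutral face.

```python
from collections import Counter

# Hypothetical per-frame AU annotations (illustrative only, not corpus data).
# Each frame is the frozenset of AUs coded as active; the empty frozenset
# represents the neutral face.
frames = [
    frozenset(),            # neutral
    frozenset(),            # neutral
    frozenset({6, 12}),     # e.g. AU6 + AU12, a Duchenne smile
    frozenset({12}),        # AU12 alone
    frozenset(),            # neutral
]

def configuration_frequencies(frames):
    """Return each AU configuration's relative frequency across coded frames."""
    counts = Counter(frames)
    total = len(frames)
    return {config: n / total for config, n in counts.most_common()}

freqs = configuration_frequencies(frames)
```

Ranking configurations this way makes the paper's central observation easy to check on any AU-coded corpus: if the neutral face dominates, the empty configuration tops the frequency table.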
Human interlocutors automatically adapt their verbal and non-verbal signals so that different behaviors become synchronized over time. Multimodal communication comes naturally to humans, but not to Embodied Conversational Agents (ECAs). Knowing which behavioral channels synchronize within and across speakers, and how they align, is critical for the development of ECAs. Yet little data-driven research exists to provide guidelines for synchronizing different channels within an interlocutor. This study focuses on intrapersonal dependencies of multimodal behavior, applying cross-recurrence analysis to a multimodal communication dataset to better understand the temporal relationships between language and gestural behavior channels. By shedding light on the intrapersonal synchronization of communicative channels in humans, we provide an initial manual for modality synchronization in ECAs.

CCS CONCEPTS: • Human-centered computing → Empirical studies in HCI; • Computing methodologies → Discourse, dialogue and pragmatics.