Tailoring the linguistic content of automatically generated descriptions to the preferences of a target user has been shown to be an effective way to produce higher-quality output that may even have a greater impact on user behaviour. It is also known that the non-verbal behaviour of an embodied agent can have a significant effect on users' responses to content presented by that agent. However, to date no one has examined the contribution of non-verbal behaviour to the effectiveness of user tailoring in automatically generated embodied output. We describe a series of experiments designed to address this question. We begin by introducing a multimodal dialogue system designed to generate descriptions and comparisons tailored to user preferences, and demonstrate that the user-preference tailoring is detectable to an overhearer when the output is presented as synthesised speech. We then present a multimodal corpus consisting of the annotated facial expressions used by a speaker to accompany the generated tailored descriptions, and verify that the most characteristic positive and negative expressions used by that speaker are identifiable when resynthesised on an artificial talking head. Finally, we combine the corpus-derived facial displays with the tailored descriptions to test whether the addition of the non-verbal channel improves users' ability to detect the intended tailoring, comparing two strategies for selecting the displays: one based on a simple corpus-derived rule, and one making direct use of the full corpus data. The performance of the subjects who saw displays selected by the rule-based strategy did not differ significantly from that of the subjects who received only the linguistic content, while the subjects who saw the data-driven displays were significantly worse at detecting the correctly tailored output. We propose a possible explanation for this result, and also make recommendations for developers of future systems that may use an embodied agent to present user-tailored content.

This article integrates and extends the work described in Foster and White (2005) and Foster (2007a,b).