Depending on their application, Embodied Conversational Agents (ECAs) must be able to express various affects or social constructs, such as emotions or social attitudes. Non-verbal signals, such as smiles or gestures, contribute to the expression of attitudes. Social attitudes affect the whole behavior of a person: as Scherer puts it, they are "characteristic of an affective style that colors the entire interaction" [1]. Moreover, recent findings have demonstrated that non-verbal signals are not interpreted in isolation but along with surrounding signals: for instance, a smile followed by a gaze aversion and a head aversion may signal embarrassment rather than amusement [2]. Non-verbal behavior planning models designed to allow ECAs to express attitudes should therefore consider complete sequences of non-verbal signals rather than each signal in isolation. However, existing models either do not take this into account or do so only in a limited manner. The contribution of this paper is twofold: a methodology for automatically extracting sequences of non-verbal signals characteristic of a social phenomenon from a multimodal corpus, and a non-verbal behavior planning model that takes into account sequences of non-verbal signals rather than treating signals independently. This methodology is applied to design a virtual recruiter capable of expressing social attitudes, which is then evaluated both within and outside of an interaction context.