This paper describes the results of a research project aimed at implementing a 'realistic' 3D Embodied Agent that can be animated in real time and is 'believable and expressive': that is, able to communicate complex information coherently, through the combination and tight synchronisation of verbal and nonverbal signals. We describe, in particular, how we 'animate' this Agent (whom we called Greta) so as to enable her to manifest the affective states that are dynamically activated and de-activated in her mind during the dialog with the user. The system is made up of three tightly interrelated components:
- a representation of the Agent's Mind: this includes long- and short-term affective components (personality and emotions) and simulates how emotions are triggered and decay over time, according to the Agent's personality and to the context, and how several emotions may overlap. Dynamic belief networks with goal weighting are the formalism we employ for this purpose;
- a mark-up language to denote the communicative meanings that may be associated with the dialog moves performed by the Agent;
- a translation of the Agent's tagged move into a facial expression that appropriately combines the available channels (gaze direction, eyebrow shape, head direction and movement, etc.).
The final output is a 3D facial model that respects the MPEG-4 standard and uses MPEG-4 Facial Animation Parameters (FAPs) to produce facial expressions. Throughout the paper, we illustrate the results obtained with an example dialog in the domain of 'Advice about eating disorders'. The paper concludes with an analysis of the advantages of our cognitive model of emotion triggering and of the problems found in testing it. Although we have not yet completed a formal evaluation of our system, we briefly describe how we plan to assess the Agent's believability in terms of the consistency of its communicative behavior.
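To make the triggering-and-decay mechanism mentioned above more concrete, the following is a minimal, self-contained sketch in Python. It is not the paper's actual implementation (which relies on dynamic belief networks): it only assumes a simplified model in which an emotion's intensity grows with the weight of the goal it monitors and with the change in the Agent's belief that the goal will be achieved, and then decays exponentially at a personality-dependent rate. All class names, parameters and numeric values are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    weight: float            # importance of the goal to the Agent (0..1)

@dataclass
class Emotion:
    name: str
    goal: Goal
    intensity: float = 0.0
    decay_rate: float = 0.5  # per time step; here tied to personality

class AffectiveState:
    """Toy affective state: emotions are triggered when the believed
    probability of achieving a monitored goal changes, and decay over time."""

    def __init__(self, personality_decay: float = 0.5):
        self.personality_decay = personality_decay
        self.active: dict[str, Emotion] = {}

    def trigger(self, name: str, goal: Goal, p_before: float, p_after: float):
        # Intensity grows with the goal's weight and with the shift in the
        # Agent's belief that the goal will (or will not) be achieved.
        delta = abs(p_after - p_before)
        intensity = goal.weight * delta
        emo = self.active.get(name) or Emotion(name, goal, decay_rate=self.personality_decay)
        emo.intensity = max(emo.intensity, intensity)  # overlapping triggers keep the stronger value
        self.active[name] = emo

    def step(self, dt: float = 1.0):
        # Exponential decay; emotions below a small threshold are de-activated.
        for name in list(self.active):
            emo = self.active[name]
            emo.intensity *= math.exp(-emo.decay_rate * dt)
            if emo.intensity < 0.05:
                del self.active[name]

# Example: the user reports skipping meals; the Agent's belief that the
# "user is healthy" goal will be achieved drops, triggering sorry-for.
state = AffectiveState(personality_decay=0.3)
state.trigger("sorry-for", Goal("user_is_healthy", weight=0.9), p_before=0.8, p_after=0.3)
state.step()
print({n: round(e.intensity, 3) for n, e in state.active.items()})
```

Several emotions triggered by different goals can be active at once in this sketch, which is the kind of overlap the Mind component has to reconcile before a single expression is produced.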
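The last step, turning a tagged dialog move into an MPEG-4 expression, can likewise be illustrated with a hedged sketch. The element and attribute names, the meaning-to-channel table and the FAP indices below are hypothetical placeholders, not the tag set or the mapping actually used for Greta; the sketch only shows the general idea of reading the communicative meanings attached to a move and emitting FAP values for the corresponding channels.

```python
import xml.etree.ElementTree as ET

# A hypothetical tagged move: the element and attribute names are invented
# for illustration and do not reproduce the paper's mark-up language.
TAGGED_MOVE = """
<move>
  <performative type="inform" affect="sorry-for" certainty="uncertain">
    I see. Skipping meals may be a sign that something is wrong.
  </performative>
</move>
"""

# Illustrative mapping from communicative meanings to (channel, FAP index, value)
# triples. The FAP indices are placeholders, not normative MPEG-4 assignments.
MEANING_TO_FAPS = {
    ("certainty", "uncertain"): [("raise_l_i_eyebrow", 31, 120), ("raise_r_i_eyebrow", 32, 120)],
    ("affect", "sorry-for"):    [("head_pitch", 48, -80), ("top_eyelid_l", 19, 40)],
}

def move_to_faps(xml_text: str) -> list[tuple[str, int, int]]:
    """Collect the FAP settings required by all meanings tagged on the move."""
    faps = []
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        for attr, value in elem.attrib.items():
            faps.extend(MEANING_TO_FAPS.get((attr, value), []))
    return faps

if __name__ == "__main__":
    for name, fap, value in move_to_faps(TAGGED_MOVE):
        print(f"FAP {fap:3d} ({name}) = {value}")
```

In the real system the mapping is context-sensitive and has to resolve conflicts when several meanings compete for the same channel (e.g. eyebrows requested both by uncertainty and by an emotion); the table lookup above deliberately ignores that problem.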