In this paper we proposed a computational model that automatically integrates a knowledge base with an affective model. The knowledge base, represented as a semantic model, provides an accurate definition of the emotional interaction between a virtual character and its environment. The affective model generates emotional states from the emotional output of the knowledge base, and these states are visualized through facial expressions generated automatically according to the MPEG-4 standard. To test the model, we designed a story that supplies the events, preferences, goals, and agent interactions used as input. The emotional states obtained as output were fully coherent with this input. The facial expressions representing these states were then evaluated by a group of participants from different academic backgrounds, showing that the emotional states can be recognized in the face of the virtual character.
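To make the pipeline concrete, the sketch below illustrates, under stated assumptions, how an appraisal produced by a knowledge base could drive an affective model and then be mapped to MPEG-4-style facial animation parameters. The class and function names (Appraisal, appraise_event, AffectiveModel, emotion_to_faps), the appraisal fields, and the numeric parameter values are hypothetical illustrations, not the implementation described in this paper.

```python
# Minimal, hypothetical sketch of the knowledge base -> affective model ->
# facial expression pipeline. All names and values are illustrative only.

from dataclasses import dataclass


@dataclass
class Appraisal:
    """Emotional output of the knowledge base for a single story event."""
    desirability: float   # [-1, 1]: how much the event helps or hurts the agent's goals
    expectedness: float   # [0, 1]: how predictable the event was


def appraise_event(event: str, goals: set) -> Appraisal:
    # Stand-in for the semantic knowledge base: a simple keyword check decides
    # whether the event furthers or blocks the character's goals.
    helps = any(goal in event for goal in goals)
    return Appraisal(desirability=1.0 if helps else -0.8, expectedness=0.3)


class AffectiveModel:
    """Maps an appraisal to a labelled emotional state with an intensity."""

    def update(self, a: Appraisal):
        if a.desirability > 0:
            label = "joy" if a.expectedness > 0.5 else "surprise"
        else:
            label = "distress"
        return label, abs(a.desirability)


def emotion_to_faps(label: str, intensity: float) -> dict:
    # Toy mapping from an emotional state to a few MPEG-4-style facial
    # animation parameters (FAPs); real FAP tables are far richer.
    profiles = {
        "joy":      {"stretch_l_cornerlip": 0.8, "raise_l_cornerlip": 0.6},
        "surprise": {"raise_l_i_eyebrow": 0.9, "open_jaw": 0.5},
        "distress": {"lower_t_midlip": 0.7, "squeeze_l_eyebrow": 0.6},
    }
    return {fap: value * intensity for fap, value in profiles[label].items()}


if __name__ == "__main__":
    model = AffectiveModel()
    appraisal = appraise_event("the character finds the lost key", goals={"key"})
    label, intensity = model.update(appraisal)
    print(label, emotion_to_faps(label, intensity))
```

In this sketch the appraisal plays the role of the knowledge base's emotional output, the AffectiveModel turns it into a discrete emotional state, and the final mapping scales a small set of facial animation parameters by the state's intensity, mirroring the three stages summarized above.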