To improve the social capabilities of embodied conversational agents, we propose a computational model that enables agents to automatically select and display appropriate smiling behavior during human-machine interaction. A smile may convey different communicative intentions depending on subtle characteristics of the facial expression and on contextual cues. To construct such a model, as a first step we explore the morphological and dynamic characteristics of the different types of smiles (polite, amused, and embarrassed) that an embodied conversational agent may display. The resulting smile lexicon is based on a corpus of virtual agent smiles created directly by users and analyzed with a machine learning technique. Moreover, during an interaction, a smile influences the observer's perception of the speaker's interpersonal stance. As a second step, we propose a probabilistic model to automatically compute the user's potential perception of the embodied conversational agent's social stance depending on its smiling behavior and its physical appearance. This model, based on a corpus of users' perceptions of smiling and non-smiling virtual agents, enables a virtual agent to determine the appropriate smiling behavior to adopt given the interpersonal stance it wants to express. An experiment using real human-virtual agent interaction provided some validation of the proposed model.
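As a rough sketch of how such a probabilistic model can drive behavior selection (the notation below is ours and is meant only as an illustration, not as the model's actual formulation), let $S$ denote the perceived interpersonal stance, $M$ the agent's smiling behavior, and $A$ its physical appearance. If the corpus yields an estimate of the conditional distribution $P(S \mid M, A)$, then an agent with appearance $a$ that wants to convey a target stance $s^{*}$ may choose
\[
m^{*} = \arg\max_{m} \, P(S = s^{*} \mid M = m, A = a),
\]
i.e., the smiling behavior most likely to be perceived by the user as expressing the intended stance.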