Abstract. In a multi-agent system, agents must decide what to do and in what order. Autonomy is a key notion in such a system, since it is mainly the autonomy of the agents that makes the environment unpredictable and complex. From a user's standpoint, autonomy is equally important, but it is an ingredient that must be used with parsimony: too much leads to agents pursuing their own goals instead of those of the user; too little leaves agents overly dependent on user commands and choices for their execution. Autonomy plays a role in deciding which new goals the agent adopts, and another in choosing which of the agent's goals to address next. We have proposed the BVG (Beliefs, Values, Goals) architecture around the idea of making decisions using multiple evaluations of a situation, taking the notion of value as central to the motivational mechanisms in the agent's mind. The agent weighs these several evaluations and decides in accordance with its goals in a rational fashion. In this paper we extend the architecture in three directions: we consider the source of the agent's goals, we enhance the decision mechanisms to cover a wider range of situations, and we introduce emotion as a meta-level control mechanism over the decision processes.