Empowerment is an information-theoretic measure of an agent's capacity to affect its environment: it quantifies the agent's ability to inject information into the environment through its actions and to recapture that information through its sensors. In a nutshell, it measures the number of future options available to and perceivable by the agent. In its original formulation, empowerment does not depend on any particular extrinsic goal; it is determined solely by the agent's interaction with the world and the structure of its action-perception cycle. In this paper we introduce a new formalism that combines empowerment maximization with externally specified goal-directed behaviour. This has two main implications: on the one hand, it allows us to study the relationship between empowerment optimization and goal-directedness, and to investigate to what extent these two desirable behaviours can co-exist; on the other hand, from a more operational point of view, it yields a method for generating behaviour (i.e., a policy of a Markov decision process) that is both empowered and goal-directed, so as to design agents that remain as "empowered" as possible when facing any extrinsic task. Finally, we study how this hybrid policy handles uncertain or changing goals and delayed goal commitment.
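To make the information-theoretic reading concrete: empowerment is commonly formalised as the channel capacity from an agent's action sequences to its subsequent sensor states, which can be computed with the standard Blahut-Arimoto algorithm. The following is a minimal illustrative sketch (not the paper's method); the channel matrix `p_s_given_a`, giving the probability of each future sensor state for each action, is a hypothetical input supplied by the modeller.

```python
import numpy as np

def empowerment(p_s_given_a, iters=200):
    """Channel capacity (in bits) of the action -> sensor channel
    p_s_given_a[a, s] = P(sensor state s | action a),
    estimated with the Blahut-Arimoto algorithm."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)          # start from a uniform action distribution
    for _ in range(iters):
        p_s = p_a @ p_s_given_a            # marginal over sensor states
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0,
                                 np.log2(p_s_given_a / p_s), 0.0)
        # D( P(s|a) || P(s) ) for each action a
        d = (p_s_given_a * log_ratio).sum(axis=1)
        # Blahut-Arimoto update of the action distribution
        p_a = p_a * np.exp2(d)
        p_a /= p_a.sum()
    # capacity = mutual information at the optimising action distribution
    p_s = p_a @ p_s_given_a
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(p_s_given_a > 0,
                             np.log2(p_s_given_a / p_s), 0.0)
    return float((p_a[:, None] * p_s_given_a * log_ratio).sum())

# A deterministic, fully distinguishable channel: 2 actions, 2 outcomes.
# Every action leads to a distinct perceivable state, so empowerment is 1 bit.
print(empowerment(np.eye(2)))     # -> 1.0

# A channel where actions have no perceivable effect: empowerment is 0 bits.
print(empowerment(np.full((2, 2), 0.5)))   # -> 0.0
```

The two toy channels illustrate the "number of future options" intuition: empowerment grows with the number of outcomes the agent can both reach and distinguish through its sensors, and collapses to zero when actions make no perceivable difference.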