One of the main challenges for conversational agents is to select the optimal dialogue policy given the state of the interaction. This challenge becomes even harder when the conversational agent not only has to achieve a specific task but also aims at building rapport. Although some previous work has tackled this challenge using a Reinforcement Learning (RL) approach, it tends to consider a single optimal policy for all users, regardless of their conversational goals. In this work, we describe a framework for building an RL-based agent able to adapt its dialogue policy to its user's conversational goals. After building a rule-based agent and a user simulator that communicate at the dialogue-act level, we crowdsource the authoring of surface sentences for both the simulated users and the agent, which allows us to generate a dataset of interactions in natural language. We then annotate each of these interactions with a single rapport score and analyze the links between the simulated users' conversational goals, the agent's conversational policies, and rapport. Our results show that rapport was higher when either both or neither of the interlocutors tried to build rapport. We use this result to inform the design of a social reward function, which we rely on to train an RL-based agent using a hybrid approach combining supervised learning and reinforcement learning. We evaluate our approach by comparing two versions of our RL-based agent: one that takes users' conversational goals into account and one that does not. The results show that an agent that adapts its dialogue policy to users' conversational goals performs better.