Expanding the debate about empathy with human beings, animals, and fictional characters to include human-robot relationships, this paper proposes two perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, the second normative. The epistemological approach clarifies whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns what exactly we empathize with if robots have no emotions or beliefs, lacking as they do consciousness in any robust sense. By comparing robots with fictional characters, however, the paper shows that we can still empathize with robots and that many existing accounts of empathy and mindreading are compatible with this view. In doing so, the paper highlights the significance of perspective-taking and argues that we also ascribe to robots something like a perspectival experience. The normative approach examines the moral impact of empathizing with robots. Here the paper critically discusses three possible responses: the strategic, the anti-barbarizational, and the pragmatist. The pragmatist position is defended on the grounds that we are increasingly compelled to interact with robots in a shared world and that taking robots into our moral consideration should be seen as an integral part of our self- and other-understanding.