In recent years, explanations have become a pressing concern in AI research. This development has been driven by the increased use of black-box models and a growing recognition of the importance of trustworthy AI. In particular, explanations are necessary in human–agent interactions to ensure that the user can trust the agent and that collaboration is effective. Human–agent interactions are complex social scenarios involving a user, an autonomous agent, and an environment or task with its own distinct properties. Such interactions therefore require a wide variety of explanations, which are not covered by the methods of any single AI discipline, such as computer vision or natural language processing. In this paper, we map out which types of explanations are important for human–agent interactions, surveying the field via a scoping review. In addition to the introspective explanations typically tackled by explainability researchers, we examine assistive explanations, which aim to support the user with their task. We also survey what causes the need for an explanation in the first place. We identify a variety of causes specific to human–agent interaction and categorize them according to whether they center on the agent’s behavior, the user’s mental state, or an external entity. Our overview aims to guide robotics practitioners in designing agents with more comprehensive explanation-related capacities, covering different explanation types and the concrete situations in which explanations should be given.