Intelligent agents (IAs) that use machine learning for decision-making often cannot explain what they are going to do, which makes human-IA collaboration challenging. However, previous methods of explaining IA behavior require IA developers to predefine vocabulary that expresses motion, which becomes problematic as IA decision-making grows more complex. This paper proposes Manifestor, a method for explaining an IA's future motion with autonomous vocabulary learning. With Manifestor, an IA learns vocabulary from a person's instructions about how the IA should act. A notable contribution of this paper is that we formalized the communication gap between a person and an IA in the vocabulary-learning phase: the IA's goal may differ from what the person wants the IA to achieve, so the IA must infer the latter to judge whether a motion matches the person's instruction. We evaluated Manifestor by investigating whether people can accurately predict an IA's future motion from explanations generated with Manifestor. We compared Manifestor's vocabulary with that from optimal, which was acquired in a situation in which the communication-gap problem did not exist, and with that from ablation, which was learned under the false assumption that the IA and person shared a goal. The experimental results revealed that the vocabulary learned with Manifestor improved people's prediction accuracy as much as that from optimal did, whereas that from ablation failed, suggesting that Manifestor enables an IA to properly learn vocabulary from people's instructions even when a communication gap exists.

INDEX TERMS Explainable AI, Human-agent interaction, Intelligent agent, Deep reinforcement learning.