Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI: forming abstractions, and learning about the relational and causal structure of the world. Here, we explore whether reinforcement learning agents might likewise benefit from explanations. We outline a family of relational tasks that involve selecting an object that is the odd one out in a set (i.e., unique along one of many possible feature dimensions). Odd-one-out tasks require agents to reason over multi-dimensional relationships among a set of objects. We show that agents do not learn these tasks well from reward alone, but achieve > 90% performance when they are also trained to generate language explaining object properties or why a choice is correct or incorrect. In further experiments, we show how predicting explanations enables agents to generalize appropriately from ambiguous, causally confounded training, and even to meta-learn to perform experimental interventions to identify causal structure. We show that explanations help overcome the tendency of agents to fixate on simple features, and we explore which aspects of explanations make them most beneficial. Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems.

Explanations, language that provides explicit information about the abstract, causal structure of the world, are central to human learning (Keil et al., 2000; Lombrozo, 2006). Explanations help solve the credit assignment problem because they link a concrete situation to generalizable abstractions that can be used in the future (Lombrozo, 2006; Lombrozo and Carey, 2006). Thus explanations allow us to learn efficiently from otherwise underspecified examples (Ahn et al., 1992). Human explanations selectively highlight generalizable causal factors and thereby improve our causal understanding (Lombrozo and Carey, 2006). Similarly, they help us to make comparisons and master re-