Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior, making use of confounding factors within datasets to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop, such that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.

Imagine a plant phenotyping team attempting to characterize crop resistance to plant pathogens. The plant physiologist records a large amount of hyperspectral imaging data. Impressed by the results of deep learning in other scientific areas, she wants to establish similar results for phenotyping. Consequently, she asks a machine learning expert to apply deep learning to analyze the data. Luckily, the resulting predictive accuracy is very high. The plant physiologist, however, remains skeptical. The results are "too good to be true". Checking the decision process of the deep model using explainable artificial intelligence (AI), the machine learning expert is flabbergasted to find that the learned deep model uses clues within the data that do not relate to the biological problem at hand, so-called confounding factors. The physiologist loses trust in AI and turns away from it, proclaiming it to be useless.

This example encapsulates a critical issue of current explainable AI [1, 2]. The seminal paper of Lapuschkin et al. [3] helps in "unmasking Clever Hans predictors and assessing what machines really learn". However, rather than proclaiming, as the plant physiologist might, that the machines have learned the right predictions for the wrong reasons and therefore cannot be trusted, we here showcase that interactions between the learning system and the human user can correct the model towards making the right predictions for the right reasons [4]. This may also increase trust in machine learning models. Indeed, trust lies at the foundation of major theories of interpersonal relationships in psychology [5, 6], and we argue that interaction and understandability are central to trust in learning machines. Surprisingly, the link between interacting, explaining, and building trust has been largely ignored by the machine learning literature. Existing approaches focus on passive learning only and do not consider the interaction between the user and the learner [7, 8, 9], whereas interactive learning frameworks such as active [10] and coactive learning [11] do not consider the issue of trust. In active learning, for instance, the model presents unlabeled instances to the user and, in exchange, obtains their labels. This exchange is completely opaque: the user remains oblivious to the model's beliefs and the reasons for its predictions...
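
To make the idea of revising a model through explanation feedback more concrete, the sketch below shows one way such feedback can enter training, in the spirit of the "right for the right reasons" penalty of Ross et al. [4] cited above. It is a minimal illustration under stated assumptions, not the paper's exact procedure: the names `rrr_loss`, `mask` (a user-provided annotation of confounded input regions), and the weight `lam` are introduced here for illustration, and a standard PyTorch classifier is assumed.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=10.0):
    """Cross-entropy plus an explanation penalty (illustrative sketch):
    input gradients are discouraged inside user-annotated regions
    (mask == 1) that the scientist has flagged as confounded."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    answer_loss = F.cross_entropy(logits, y)          # be right
    # Gradient of the summed log-probabilities w.r.t. the input,
    # kept in the graph so the penalty itself can be optimized.
    log_probs = F.log_softmax(logits, dim=1)
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    reason_loss = (mask * grads).pow(2).sum()         # for the right reasons
    return answer_loss + lam * reason_loss

# Hypothetical usage inside an interactive loop:
# loss = rrr_loss(net, images, labels, confounder_mask)
# loss.backward(); optimizer.step()
```

In an interactive setting, the scientist would inspect the model's explanations (for example, input gradients or relevance maps), mark the regions that correspond to confounders, and the model would be retrained with such a penalty until its explanations no longer rely on them.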