Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior, exploiting confounding factors within datasets to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop, such that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.

Imagine a plant phenotyping team attempting to characterize crop resistance to plant pathogens. The plant physiologist records a large amount of hyperspectral imaging data. Impressed by the results of deep learning in other scientific areas, she wants to establish similar results for phenotyping. Consequently, she asks a machine learning expert to apply deep learning to analyze the data. Luckily, the resulting predictive accuracy is very high. The plant physiologist, however, remains skeptical. The results are "too good to be true". Checking the decision process of the deep model using explainable artificial intelligence (AI), the machine learning expert is flabbergasted to find that the learned deep model uses clues within the data that do not relate to the biological problem at hand, so-called confounding factors. The physiologist loses trust in AI and turns away from it, proclaiming it to be useless.

This example encapsulates a critical issue of current explainable AI [1, 2]. Indeed, the seminal paper of Lapuschkin et al. [3] helps in "unmasking Clever Hans predictors and assessing what machines really learn"; however, rather than proclaiming, as the plant physiologist might, that the machines have learned the right predictions for the wrong reasons and can therefore not be trusted, we here showcase that interactions between the learning system and the human user can correct the model towards making the right predictions for the right reasons [4]. This may also increase trust in machine learning models. Indeed, trust lies at the foundation of major theories of interpersonal relationships in psychology [5, 6], and we argue that interaction and understandability are central to trust in learning machines. Surprisingly, the link between interacting, explaining, and building trust has been largely ignored by the machine learning literature. Existing approaches focus on passive learning only and do not consider the interaction between the user and the learner [7, 8, 9], whereas interactive learning frameworks such as active [10] and coactive learning [11] do not consider the issue of trust. In active learning, for instance, the model presents unlabeled instances to a user and, in exchange, obtains their labels. This is completely opaque: the user is oblivious to the model's beliefs and reasons for predictions...
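The XIL protocol sketched above (query an instance, show prediction and explanation, collect feedback, retrain under that feedback) can be outlined in a few lines. The following Python sketch is illustrative and not the authors' implementation: `pool` is an assumed data handle exposing `select()` and `labeled()`, while `explain`, `query_scientist`, and `fit_with_constraints` are hypothetical callbacks standing in for explanation generation, the scientist's feedback (a mask over input regions the model should not rely on), and the correction step (for example, an input-gradient penalty as sketched after the next abstract).

```python
# Hypothetical XIL loop: query, explain, collect feedback, retrain.
# All callbacks and the `pool` interface are assumptions for illustration,
# not the paper's actual API.
def xil(model, pool, explain, query_scientist, fit_with_constraints, rounds=10):
    feedback = []                                # (instance, label, "do not use" mask)
    for _ in range(rounds):
        x, y = pool.select()                     # pick an instance, e.g. an uncertain one
        saliency = explain(model, x)             # current explanation for the prediction
        mask = query_scientist(x, model(x), saliency)
        if mask is not None:                     # scientist flagged confounding regions
            feedback.append((x, y, mask))
        fit_with_constraints(model, pool.labeled(), feedback)
    return model
```

The key design choice, in contrast to plain active learning, is that the scientist sees the model's explanation before giving feedback, so the feedback can target the model's reasons rather than only its labels.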
Explaining black-box models such as deep neural networks is becoming increasingly important, as it helps to build trust and aids debugging. Popular forms of explanations map the features to a vector indicating their individual importance to a decision at the instance level. These explanations can then be used to prevent the model from learning the wrong bias in the data, for example due to ambiguity. For instance, Ross et al.'s "right for the right reasons" propagates user explanations backwards through the network by formulating differentiable constraints based on input gradients. Unfortunately, input gradients, as well as many other widely used explanation methods, form only an approximation of the decision boundary and assume the underlying model to be fixed. Here, we demonstrate how to make use of influence functions, a well-known robust statistic, in the constraints to correct the model's behaviour more effectively. Our empirical evidence demonstrates that this "right for better reasons" (RBR) approach considerably reduces the time needed to correct the classifier at training time and boosts the quality of explanations at inference time compared to input gradients. We also showcase the effectiveness of RBR in correcting "Clever Hans"-like behaviour in a real, high-dimensional domain.
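For context, the input-gradient constraint of Ross et al. that RBR builds on can be written as a single loss term. The PyTorch sketch below is an illustrative reconstruction, not the paper's code: the user supplies a binary mask `A` marking features the model should not use, and the gradient of the summed log-probabilities with respect to the inputs is penalized on those features. RBR replaces this input-gradient term with an influence-function-based one; that formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, A, lam=100.0):
    """'Right answer' cross-entropy plus a 'right reason' input-gradient penalty."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Gradient of the summed log-probabilities w.r.t. the inputs, kept
    # differentiable (create_graph=True) so the penalty itself can be trained on.
    log_probs = F.log_softmax(logits, dim=1).sum()
    input_grads = torch.autograd.grad(log_probs, x, create_graph=True)[0]

    # Penalize relevance falling on user-annotated irrelevant features (A == 1).
    return ce + lam * (A * input_grads).pow(2).sum()
```

In training, one would call `rrr_loss(model, x_batch, y_batch, A_batch).backward()` in place of the plain cross-entropy loss; the hyperparameter `lam` trades off predictive accuracy against compliance with the user's annotations.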
Deep learning is bringing remarkable contributions to the field of argumentation mining, but existing approaches still fall short of performing advanced reasoning tasks. We illustrate how neural-symbolic and statistical relational learning could play a crucial role in integrating symbolic and sub-symbolic methods to achieve this goal.