The recent success of deep neural networks has driven remarkable growth in Artificial Intelligence (AI) research over the past few years. One of the main challenges for the broad adoption of deep learning-based models such as Convolutional Neural Networks (CNNs) is the lack of understanding of their decisions. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a shift toward more transparent AI, resulting in techniques that explain the decisions made by AI models. This paper explores and develops a multi-scale scheme for LIME (Local Interpretable Model-Agnostic Explanations) applied to image classification, explaining the decisions of CNN models through heatmaps ranging from coarse to fine scales. More precisely, when LIME highlights a large superpixel at a coarse scale, there may be smaller regions within that superpixel that influenced the model's prediction at some finer scale. In the proposed multi-scale scheme, two weighting approaches, one based on a Gaussian distribution and the other parameter-free, are introduced to produce visual explanations observed at different scales. Promising results for multi-scale classification heatmaps of histopathology images are presented. More specifically, we investigated the proposed multi-scale approach on the Camelyon16 dataset. The results show that the explanations are faithful to the underlying model and that the visualizations are reasonably interpretable.
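To make the idea concrete, the following is a minimal sketch of one way such a multi-scale scheme could be assembled with the `lime` and `scikit-image` Python packages: LIME is run with SLIC segmentations of increasing granularity, each explanation is rasterized into a per-pixel heatmap, and the per-scale heatmaps are blended with Gaussian weights over the scale index. The function names (`heatmap_at_scale`, `multiscale_heatmap`), the chosen SLIC granularities, and the Gaussian-over-scale-index weighting are illustrative assumptions; the abstract does not specify the paper's exact weighting formulas.

```python
import numpy as np
from functools import partial
from skimage.segmentation import slic
from lime import lime_image

def heatmap_at_scale(image, classifier_fn, n_segments, label, num_samples=1000):
    """Run LIME with a SLIC segmentation of the given granularity and
    rasterize the superpixel weights for `label` into a per-pixel map."""
    seg_fn = partial(slic, n_segments=n_segments, compactness=10, start_label=0)
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn,            # classifier_fn: (N, H, W, 3) -> (N, classes)
        labels=(label,), top_labels=None,
        hide_color=0, num_samples=num_samples,
        segmentation_fn=seg_fn)
    heat = np.zeros(image.shape[:2], dtype=float)
    for segment_id, weight in explanation.local_exp[label]:
        heat[explanation.segments == segment_id] = weight
    return heat

def multiscale_heatmap(image, classifier_fn, label,
                       scales=(20, 80, 320), center=1, sigma=1.0):
    """Blend per-scale LIME heatmaps with Gaussian weights over the scale
    index (an illustrative stand-in for the paper's weighting schemes)."""
    idx = np.arange(len(scales))
    weights = np.exp(-0.5 * ((idx - center) / sigma) ** 2)
    weights /= weights.sum()
    maps = [heatmap_at_scale(image, classifier_fn, n, label) for n in scales]
    return sum(w * m for w, m in zip(weights, maps))
```

Wrapping the CNN in a batch-prediction `classifier_fn` (e.g., a softmax over patch logits) and overlaying `multiscale_heatmap(...)` on the input tile would then yield the kind of coarse-to-fine visualization described above.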