Model-agnostic methods in explainable AI (XAI) isolate the explanation system from the architecture of the AI model, typically a machine-learning or black-box model. Existing XAI libraries offer a good number of explanation methods that are reusable across domains and models, with different choices of parameters. However, it is not clear which explainer is best suited to a given situation, domain, AI model, and set of user preferences; choosing a proper explanation method is a complex decision-making process in itself. In this paper, we propose applying Case-Based Reasoning (CBR) to support this task by capturing user preferences about explanation results in a case base. We define the corresponding CBR process to help retrieve a suitable explainer from a catalogue built from existing XAI libraries. CBR supports learning from past explanation experiences and helps retrieve explainers for similar future scenarios.
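To illustrate the core retrieval step, the following is a minimal sketch of CBR-based explainer retrieval, assuming each case is described by attributes such as domain, model type, and explanation scope, with user feedback stored as a rating. The attribute names, similarity measure, and case-base contents are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One explanation experience: the problem description and the explainer that worked."""
    domain: str             # e.g. "tabular", "images" (illustrative attribute)
    model_type: str         # e.g. "random_forest", "CNN"
    explanation_scope: str  # "local" or "global"
    explainer: str          # the solution: which XAI method was used
    user_rating: float      # user feedback on the explanation result (0..1)

def similarity(query: dict, case: Case, weights: dict) -> float:
    """Weighted exact-match similarity between a query and a stored case."""
    return sum(w * (query[attr] == getattr(case, attr))
               for attr, w in weights.items())

def retrieve(query: dict, case_base: list[Case],
             weights: dict, k: int = 3) -> list[Case]:
    """Return the k most similar past cases, preferring well-rated explainers."""
    return sorted(case_base,
                  key=lambda c: (similarity(query, c, weights), c.user_rating),
                  reverse=True)[:k]

# Toy case base of past explanation experiences (illustrative values only).
cases = [
    Case("tabular", "random_forest", "local", "LIME", 0.8),
    Case("tabular", "gradient_boosting", "global", "SHAP", 0.9),
    Case("images", "CNN", "local", "Grad-CAM", 0.85),
]
weights = {"domain": 0.5, "model_type": 0.3, "explanation_scope": 0.2}
best = retrieve({"domain": "tabular", "model_type": "random_forest",
                 "explanation_scope": "local"}, cases, weights, k=1)
print(best[0].explainer)  # -> LIME
```

A full CBR cycle would also revise the retrieved suggestion with the user's new feedback and retain the result as a fresh case, which is how the case base accumulates explanation experiences over time.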
With the increasing importance of e-commerce and the immense variety of products available, users need help deciding which products interest them most. This is one of the main goals of recommender systems. However, users' trust may be compromised if they do not understand how or why a recommendation was produced. Here, explanations are essential to improve user confidence in recommender systems and to make recommendations useful.
Providing explanation capabilities in recommender systems is not an easy task, as their success depends on several aspects such as the explanation's goal, the user's expectations, the knowledge available, and the presentation method. Therefore, this work proposes a conceptual model that alleviates this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems, ensuring they are correctly designed and suited to the user's needs. Although earlier explanation taxonomies underpin this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution by formalizing this model as an ontology that can be integrated into the development of proper explanations for recommender systems.
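As a rough illustration of what such a formalization could look like, the following sketch defines a few ontology classes and properties with the owlready2 library. The class and property names (Explanation, ExplanationGoal, hasGoal, and so on) and the ontology IRI are assumptions derived from the aspects named above, not the paper's actual ontology.

```python
from owlready2 import Thing, ObjectProperty, get_ontology

# Hypothetical IRI; the paper's ontology would define its own namespace.
onto = get_ontology("http://example.org/recsys-explanations.owl")

with onto:
    class Explanation(Thing): pass
    class ExplanationGoal(Thing): pass      # e.g. transparency, trust
    class UserExpectation(Thing): pass
    class KnowledgeSource(Thing): pass
    class PresentationMethod(Thing): pass   # e.g. textual, visual

    # Object properties linking an explanation to the aspects it must satisfy.
    class hasGoal(ObjectProperty):
        domain = [Explanation]
        range = [ExplanationGoal]

    class addressesExpectation(ObjectProperty):
        domain = [Explanation]
        range = [UserExpectation]

    class usesKnowledge(ObjectProperty):
        domain = [Explanation]
        range = [KnowledgeSource]

    class presentedAs(ObjectProperty):
        domain = [Explanation]
        range = [PresentationMethod]

# Serialize the ontology so it can be reused by explanation-generation tools.
onto.save(file="recsys-explanations.owl", format="rdfxml")
```

Encoding the model as OWL classes and properties in this way makes the requirements machine-readable, so a development tool could check whether a planned explanation specifies a goal, an expectation, a knowledge source, and a presentation method before it is deployed.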