A significant problem with recommender systems is their inability to explain their recommendations, which in turn limits the feedback users can provide and the systems' capacity to adapt to users' preferences. We propose a hybrid method for calculating predicted ratings, built upon an item/aspect-based graph with users' partially given ratings, that can naturally be used to provide explanations for recommendations, extracted from user-tailored Tripolar Argumentation Frameworks (TFs). We show that our method can be understood as a gradual semantics for TFs, exhibiting a desirable, albeit weak, property of balance. We also show experimentally that our method is competitive in generating correct predictions, compared with state-of-the-art methods, and illustrate how users can interact with the generated explanations to improve the quality of recommendations.
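The abstract does not give the aggregation formula, so the following is a minimal hypothetical sketch (in Python) of the aspect-based idea only: an unrated item's predicted rating is derived from the user's ratings of items sharing its aspects, with the shared aspects doubling as raw material for explanations. All names are illustrative, not the paper's.

```python
# Hypothetical sketch of aspect-based rating prediction: an unrated item's
# predicted rating is the mean of the user's ratings for items that share
# at least one aspect with it. The paper's actual method differs.

def predict_rating(item, item_aspects, user_ratings):
    """item_aspects: item -> set of aspects; user_ratings: item -> rating."""
    aspects = item_aspects[item]
    # Ratings of rated items that share at least one aspect with `item`.
    related = [r for other, r in user_ratings.items()
               if other != item and item_aspects[other] & aspects]
    if not related:
        return None  # no evidence: leave the rating undefined
    return sum(related) / len(related)

item_aspects = {
    "movie_a": {"sci-fi", "director_x"},
    "movie_b": {"sci-fi"},
    "movie_c": {"director_x", "drama"},
}
user_ratings = {"movie_b": 4.0, "movie_c": 5.0}
print(predict_rating("movie_a", item_aspects, user_ratings))  # 4.5
```

In an explanation, the shared aspects ("sci-fi", "director_x") would name why the rated items count as evidence for the prediction.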
The study of properties of gradual evaluation methods in argumentation has received increasing attention in recent years, with studies devoted to various classes of frameworks/methods leading to conceptually similar but formally distinct properties in different contexts. In this paper we provide a novel systematic analysis of this research landscape, making three main contributions. First, we identify groups of conceptually related properties in the literature, which can be regarded as based on common patterns, and use these patterns to show that many further novel properties can be considered. Then, we provide a simplifying and unifying perspective on these groups of properties by showing that they are all implied by novel parametric principles of (either strict or non-strict) balance and monotonicity. Finally, we show that (instances of) these principles, and thus the groups of existing and novel properties they imply, are satisfied by several quantitative argumentation formalisms in the literature, confirming the principles' general validity and their utility in supporting a compact yet comprehensive analysis of properties of gradual argumentation.
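As a concrete, simplified instance of the kind of properties discussed above: the DF-QuAD gradual semantics from this literature satisfies a basic form of balance (equally strong aggregated attack and support leave the base score unchanged) and of monotonicity (adding a supporter never decreases strength). A minimal sketch using the published DF-QuAD definitions, with illustrative function names:

```python
# DF-QuAD strength of one argument from its base score and the strengths
# of its attackers and supporters (Rago et al., 2016).

def aggregate(strengths):
    """Probabilistic sum: 0 for no arguments, x + y - x*y pairwise."""
    v = 0.0
    for s in strengths:
        v = v + s - v * s
    return v

def df_quad(base, attackers, supporters):
    va, vs = aggregate(attackers), aggregate(supporters)
    if va >= vs:
        return base - base * (va - vs)    # attack dominates: weaken
    return base + (1 - base) * (vs - va)  # support dominates: strengthen

# Balance: equally strong aggregated attack and support cancel out.
assert df_quad(0.5, [0.6], [0.6]) == 0.5
# Monotonicity: adding a supporter never decreases strength.
assert df_quad(0.5, [0.6], [0.6, 0.3]) >= df_quad(0.5, [0.6], [0.6])
```

The paper's parametric principles generalise checks of exactly this shape across many semantics; the assertions above only exhibit one instance.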
Opinion polls are used in a variety of settings to assess the opinions of a population, but they mostly hide the reasoning behind these opinions. Argumentation, as understood in AI, can be used to evaluate opinions in dialectical exchanges, transparently articulating the reasoning behind them. We present a method that integrates argumentation into opinion polling, empowering voters to add new statements that render their opinions in the polls individually rational while at the same time justifying them. We then show how these poll results can be amalgamated to give a collectively rational set of voters in an argumentation framework. Our method relies upon Quantitative Argumentation Debate for Voting (QuAD-V) frameworks, which extend QuAD frameworks (a form of bipolar argumentation framework in which arguments have an intrinsic strength) with votes expressing individuals' opinions on arguments.
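The abstract does not define how votes enter the evaluation. One plausible minimal reading, assuming each voter casts pro or con votes on individual arguments, is to ground each argument's intrinsic strength in its share of positive votes before a QuAD-style evaluation. A hypothetical sketch (the actual QuAD-V definitions differ):

```python
# Hypothetical vote-to-base-score mapping for a QuAD-V-style framework:
# an argument's base score is its share of positive votes. This only
# illustrates grounding intrinsic strengths in individuals' opinions.

def base_scores(votes):
    """votes: argument -> list of +1 (pro) / -1 (con) votes."""
    scores = {}
    for arg, vs in votes.items():
        pro = sum(1 for v in vs if v > 0)
        scores[arg] = pro / len(vs) if vs else 0.5  # 0.5 = no opinion yet
    return scores

votes = {
    "raise_taxes": [+1, +1, -1],   # 2 of 3 voters in favour
    "fund_schools": [+1, +1, +1],  # unanimous support
}
print(base_scores(votes))  # {'raise_taxes': 0.666..., 'fund_schools': 1.0}
```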
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among the various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We organise the literature by the types of explanation (intrinsic and post-hoc), the models with which argumentation-based explanations are deployed, the forms of delivery, and the argumentation frameworks they use. We also lay out a roadmap for future work.