The purpose of this chapter is to examine the existing and potential contributions of argumentation theory to decision aiding, more specifically to multi-criteria decision aiding. On the one hand, decision aiding provides a general framework that can be adapted to different decision-making contexts, together with a formal theory of preferences. On the other hand, argumentation theory is a growing field of Artificial Intelligence concerned with non-monotonic logics; it studies the process of collecting arguments in order to justify and explain conclusions. The chapter is organized in three successive frames, starting from general considerations regarding decision theory and Artificial Intelligence, moving on to the specific contribution of argumentation to decision-support systems, and finally focusing on multi-criteria decision aiding.
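To make the non-monotonic flavour concrete, here is a minimal Python sketch of a Dung-style abstract argumentation framework: a conclusion is justified when its argument survives the attack relation, and adding a new attacking argument can retract a previously accepted conclusion. The arguments and attack relation below are hypothetical illustrations, not taken from the chapter.

    # Minimal sketch: grounded extension of a Dung-style argumentation framework.
    # Arguments and attacks are hypothetical examples.

    def grounded_extension(arguments, attacks):
        """Iteratively accept arguments all of whose attackers are defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in accepted or a in defeated:
                    continue
                attackers = {x for (x, y) in attacks if y == a}
                if attackers <= defeated:      # every attacker already defeated
                    accepted.add(a)
                    changed = True
                elif attackers & accepted:     # an accepted argument attacks a
                    defeated.add(a)
                    changed = True
        return accepted

    # Non-monotonicity: adding an attacker can retract a conclusion.
    args = {"buy", "too_expensive", "on_sale"}
    attacks = {("too_expensive", "buy"), ("on_sale", "too_expensive")}
    print(grounded_extension(args, attacks))   # {'buy', 'on_sale'} (order may vary)

Removing the pair ("on_sale", "too_expensive") from the attack relation leaves "too_expensive" undefeated, and the conclusion "buy" is withdrawn: exactly the defeasible behaviour the chapter attributes to argumentation.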
Decision aiding can be abstractly described as the process of assisting a user/client/decision maker by recommending possible courses of action. This process has to cope with incomplete and/or inconsistent information and must adapt to the dynamics of the environment in which it is carried out. Indeed, on the one hand, complete information about the environment is almost impossible to obtain; on the other hand, the information provided by the user is often affected by uncertainty: it may contain inconsistencies and may be dynamically revised for various reasons. The aim of this paper is to present a model of the decision aiding process that is amenable to automation. The main features of the approach are that it models decision aiding as an iterative defeasible reasoning process and that it uses argumentation to capture important aspects of the process. More specifically, argumentation is used both for representing the relations between the cognitive artefacts involved in decision aiding and for modelling the artefacts themselves. In modelling the cognitive artefacts, we make use of the notion of argument schemes and specify the related critical questions. The work reported here aims at initiating a systematic study of the use of argumentation in future decision aiding tools. Our ambition is twofold: (i) to enhance the decision support capabilities of an analyst by representing explicitly and accountably the reasons for which he recommends a solution to a decision maker, and (ii) to enhance the decision support capabilities of a (semi-)automatic device that handles (at least partially) the dialogue with the user.
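As an illustration of how argument schemes and critical questions can be operationalized, the following sketch pairs a recommendation scheme with critical questions whose negative answers defeat the recommendation. The scheme, its wording, and the surviving() test are assumptions for illustration, not the paper's formalization.

    # Minimal sketch of an argument scheme with critical questions.
    # Scheme name, premises, and questions are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ArgumentScheme:
        name: str
        premises: list
        conclusion: str
        critical_questions: list = field(default_factory=list)

        def surviving(self, answers):
            """The conclusion stands only if no critical question defeats it.

            Unanswered questions default to satisfactorily answered (True)."""
            return all(answers.get(q, True) for q in self.critical_questions)

    recommend = ArgumentScheme(
        name="argument from analyst recommendation",
        premises=["option A best satisfies the stated criteria"],
        conclusion="recommend option A to the decision maker",
        critical_questions=[
            "Are the criteria the ones the client actually holds?",
            "Is the information about option A current and consistent?",
            "Does some alternative satisfy the criteria at least as well?",
        ],
    )

    # Answering a critical question negatively defeats the recommendation.
    print(recommend.surviving({recommend.critical_questions[1]: False}))  # False

The critical questions are what make the reasoning defeasible and accountable: each records a precise point on which the user's dialogue with the analyst (or the automatic device) can challenge and revise the recommendation.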
Among the challenges for eXplainable Artificial Intelligence (XAI) is explanation generation. In this paper we address this issue by focusing on a semantic representation of the content of an explanation that could be common to any kind of XAI. We investigate knowledge representations and discuss the benefits of conceptual graph structures as a basis for representing explanations in AI.
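To suggest what such a representation might look like, here is a minimal sketch of a conceptual graph as a bipartite structure of concept nodes and relation edges, in the spirit of Sowa's conceptual graphs; the class design and the example explanation content are hypothetical, not the paper's proposal.

    # Minimal sketch of a conceptual graph: typed concept nodes linked by
    # labelled conceptual relations. The example content is hypothetical.

    class ConceptualGraph:
        def __init__(self):
            self.concepts = {}    # id -> (concept type, referent)
            self.relations = []   # (relation type, source id, target id)

        def add_concept(self, cid, ctype, referent=None):
            self.concepts[cid] = (ctype, referent)

        def add_relation(self, rtype, source, target):
            self.relations.append((rtype, source, target))

        def render(self):
            """Yield the graph in linear [Type: referent] -(rel)-> notation."""
            for rtype, s, t in self.relations:
                st, sr = self.concepts[s]
                tt, tr = self.concepts[t]
                yield f"[{st}: {sr or '*'}] -({rtype})-> [{tt}: {tr or '*'}]"

    # Encode the content of an explanation such as "the loan was rejected
    # because income is below the threshold".
    g = ConceptualGraph()
    g.add_concept("d", "Decision", "reject_loan")
    g.add_concept("f", "Feature", "income")
    g.add_concept("t", "Threshold", "30000")
    g.add_relation("cause", "f", "d")
    g.add_relation("below", "f", "t")
    print("\n".join(g.render()))

Because the graph separates what is asserted (concepts) from how the pieces relate (typed relations), the same explanation content can be rendered as text, a diagram, or a query, which is what makes such a structure a plausible common basis across kinds of XAI.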