In this work, we present an argumentation-based formalization for supporting the process of intention formation in practical agents. It builds on the belief-based goal processing model proposed by Castelfranchi and Paglieri, which is more expressive and refined than the BDI (Beliefs-Desires-Intentions) model. We focus on the progression of goals from the moment they are desires until they become intentions, including the conditions under which a goal can be cancelled. We use argumentation to support the transition of goals from their initial state to their final one. Our proposal satisfies the two properties of the supporting relation defined by Castelfranchi and Paglieri, diachrony and synchrony. The former means that support holds from the time a goal is a desire until it becomes an intention; the latter means that the support can be tracked, i.e. there is a memory of the cognitive path from the beginning of the process to its end.
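The following is a minimal sketch, not the authors' formalization, of a goal life cycle in which every stage transition must be backed by a recorded supporting argument, so that support persists from desire to intention (diachrony) and the cognitive path can be replayed afterwards (synchrony). The stage names and the example arguments are illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    ACTIVE = auto()      # the goal is still a mere desire
    PURSUABLE = auto()
    CHOSEN = auto()
    EXECUTIVE = auto()   # the goal has become an intention
    CANCELLED = auto()


@dataclass
class Goal:
    name: str
    stage: Stage = Stage.ACTIVE
    history: list = field(default_factory=list)  # memory of the supporting arguments

    def advance(self, next_stage: Stage, supporting_argument: str) -> None:
        """Move to the next stage only together with a supporting argument."""
        self.history.append((self.stage, next_stage, supporting_argument))
        self.stage = next_stage


g = Goal("clean_room")
g.advance(Stage.PURSUABLE, "arg: the room is dirty and cleaning is feasible")
g.advance(Stage.CHOSEN, "arg: no other goal conflicts with cleaning right now")
g.advance(Stage.EXECUTIVE, "arg: a plan and the needed resources are available")
print(g.stage, g.history)  # the stored history realises the 'synchrony' property
```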
During the first step of practical reasoning, i.e. deliberation, an intelligent agent generates a set of pursuable goals and then selects which of them it commits to achieving. An agent may in general generate multiple pursuable goals, which may be incompatible with one another. In this paper, we focus on the definition, identification, and resolution of these incompatibilities. The suggested approach considers the three forms of incompatibility introduced by Castelfranchi and Paglieri, namely terminal incompatibility, instrumental (or resource) incompatibility, and superfluity. We characterise these forms of incompatibility computationally by means of arguments that represent the plans that allow an agent to achieve its goals. Thus, incompatibility among goals is defined in terms of conflicts among their plans, which are represented as attacks in an argumentation framework. We also address the problem of goal selection: we propose to use abstract argumentation theory, i.e. to apply argumentation semantics, to deal with it. We use a modified version of the "cleaner world" scenario to illustrate the performance of our proposal.
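As a rough illustration of this idea (a sketch under assumed names and toy data, not the paper's formalisation), plans acting as arguments can be placed in an abstract argumentation framework where conflicts between plans become attacks, and a standard semantics, here the grounded extension, picks the arguments whose goals can be jointly pursued.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is attacked by S}."""
    def attackers(a):
        return {b for (b, x) in attacks if x == a}

    def defends(s, a):
        return all(any((c, b) in attacks for c in s) for b in attackers(a))

    extension = set()
    while True:
        new = {a for a in arguments if defends(extension, a)}
        if new == extension:
            return extension
        extension = new


# Plans p1, p2, p3 achieve the agent's goals; p2 and p3 compete for the same
# resource, which is modelled as a mutual (symmetric) attack.
arguments = {"p1", "p2", "p3"}
attacks = {("p2", "p3"), ("p3", "p2")}
print(grounded_extension(arguments, attacks))  # {'p1'}: grounded semantics stays agnostic on p2 vs p3
```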
Some abstract argumentation approaches consider that arguments have a degree of uncertainty, which impacts the degree of uncertainty of the extensions obtained from an abstract argumentation framework (AAF) under a given semantics. In these approaches, both the uncertainty of the arguments and that of the extensions are modeled by means of precise probability values. However, in many real-life situations the exact probability values are unknown, and sometimes the probability values of different sources need to be aggregated. In this paper, we tackle the problem of calculating the degree of uncertainty of the extensions when the probability values of the arguments are imprecise. We use credal sets to model the uncertainty values of arguments, and from these credal sets we calculate the lower and upper bounds of the extensions. We study some properties of the suggested approach and illustrate it with a decision-making scenario.
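A minimal sketch of the interval reading of this setting: a credal set over a Boolean "argument present" variable reduces to a probability interval, and, if arguments are assumed independent, the lower and upper probabilities that a whole extension is present are the products of the interval endpoints. The interval values and the independence assumption are illustrative, not taken from the paper.

```python
from math import prod

# Each argument's probability of being present is only known to lie in [lo, hi].
argument_intervals = {
    "a": (0.6, 0.8),
    "b": (0.5, 0.9),
    "c": (0.3, 0.4),
}

def extension_bounds(extension, intervals):
    """Lower/upper probability that every argument in the extension is present,
    assuming independence between arguments."""
    lower = prod(intervals[a][0] for a in extension)
    upper = prod(intervals[a][1] for a in extension)
    return lower, upper

print(extension_bounds({"a", "b"}, argument_intervals))  # (0.3, 0.72)
```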
Explainable Artificial Intelligence (XAI) systems, including intelligent agents, must be able to explain to the humans (or other systems) with which they interact the internal decisions, behaviours, and reasoning that produce their choices. In this paper, we focus on how an extended model of BDI (Beliefs-Desires-Intentions) agents can generate explanations about their reasoning, specifically about the goals they decide to commit to. Our proposal is based on argumentation theory: we use arguments to represent the reasons that lead an agent to make a decision and argumentation semantics to determine which arguments (reasons) are acceptable. We propose two types of explanation, partial and complete, and apply our proposal to a rescue-robot scenario.
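The sketch below illustrates one plausible reading of explanation generation from acceptable arguments; the split used here (partial = only the accepted reasons behind the committed goal, complete = those reasons plus the defeated counterarguments) and all argument names are assumptions for illustration, not the paper's exact definitions.

```python
def partial_explanation(goal, accepted, supports):
    """Return only the accepted arguments that back committing to the goal."""
    return [a for a in accepted if supports.get(a) == goal]

def complete_explanation(goal, accepted, rejected, supports):
    """Also include the rejected arguments, i.e. the reasons that were defeated."""
    return {
        "goal": goal,
        "accepted_reasons": partial_explanation(goal, accepted, supports),
        "rejected_reasons": [a for a in rejected if supports.get(a) == goal],
    }

# Toy rescue-robot example: r1 is acceptable, r2 and r3 were defeated.
supports = {"r1": "rescue_victim", "r2": "rescue_victim", "r3": "recharge"}
accepted, rejected = ["r1"], ["r2", "r3"]
print(partial_explanation("rescue_victim", accepted, supports))            # ['r1']
print(complete_explanation("rescue_victim", accepted, rejected, supports))
```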