The field of computational models of argument is emerging as an important aspect of artificial intelligence research. This stems from the recognition that if we are to develop robust intelligent systems, they must handle incomplete and inconsistent information in a way that emulates how humans tackle such a complex task. One of the key ways humans do this is through argumentation: either internally, by evaluating arguments and counterarguments, or externally, for instance by entering into a discussion or debate where arguments are exchanged. As we report in this review, recent developments in the field are leading to technology for artificial argumentation in the legal, medical, and e-government domains, and interesting tools for argument mining, debating technologies, and argumentation solvers are emerging.
This paper introduces epistemic graphs as a generalization of the epistemic approach to probabilistic argumentation. In these graphs, an argument can be believed or disbelieved up to a given degree, thus providing a more fine-grained alternative to standard Dung-style approaches when it comes to determining the status of a given argument. Furthermore, the flexibility of the epistemic approach allows us both to model the rationale behind the existing semantics and to deviate from them completely when required. Epistemic graphs can model attack and support as well as relations that are neither attack nor support. The way other arguments influence a given argument is expressed by epistemic constraints, which can restrict the belief we have in an argument with varying degrees of specificity. The fact that we can specify the rules under which arguments should be evaluated, and that we can include constraints between unrelated arguments, makes the framework more context-sensitive. It also allows for better modelling of imperfect agents, which can be important in multi-agent applications.

Example 3 (Adapted from [23,24]). The work in [23] investigated the problem of reinstatement in argumentation using an instantiated theory and preferences. We draw attention to two scenarios considered in the study, concerning a weather forecast and a car purchase, where each comes in a basic (without the last sentence) and an extended (full text) version.

The weather forecasting service of the broadcasting company AAA says that it will rain tomorrow. Meanwhile, the forecasting service of the broadcasting company BBB says that it will be cloudy tomorrow but that it will not rain. It is also well known that the forecasting service of BBB is more accurate than that of AAA. However, yesterday the trustworthy newspaper CCC published an article which said that BBB has cut the resources for its weather forecasting service in the past months, thus making it less reliable than in the past.

You are planning to buy a second-hand car, and you go to a dealership with BBB, a mechanic who was recommended to you by a friend. The salesperson AAA shows you a car and says that it needs very little work done to it. BBB says it will require quite a lot of work, because in the past he had to fix several issues in a car of the same model. While you are at the dealership, your friend calls to tell you that he knows (beyond a shadow of a doubt) that BBB made unnecessary repairs to his car last month.

The formal representation of the basic (resp. extended) versions of these scenarios is equivalent (we refer to [23,24] for more details). However, the findings show that they are not judged in the same way, and they suggest that the domain-dependent knowledge of the participants affected their performance on the tasks. This shows the importance of modelling context-sensitivity and of allowing an agent to evaluate structurally equivalent graphs differently.
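To give a flavour of how such epistemic constraints can be written down, the following minimal Python sketch encodes a toy version of the weather scenario. The argument names, the 0.5 threshold, and the two constraint rules are illustrative assumptions made for this review, not the constraints used in [23,24].

    # Toy epistemic graph for the weather scenario: beliefs are values
    # in [0,1], with 0.5 as the boundary between disbelief and belief.
    # The constraint rules below are illustrative assumptions only.
    beliefs = {
        "AAA_rain": 0.60,            # AAA: it will rain tomorrow
        "BBB_no_rain": 0.55,         # BBB: it will not rain tomorrow
        "BBB_more_accurate": 0.80,   # BBB is usually more accurate than AAA
        "CCC_cut_resources": 0.90,   # CCC: BBB cut its forecasting resources
    }

    def constraints_satisfied(b):
        ok = True
        # If BBB is held to be more accurate and its reliability is not
        # undermined, then AAA's forecast should not be believed.
        if b["BBB_more_accurate"] > 0.5 and b["CCC_cut_resources"] <= 0.5:
            ok = ok and b["AAA_rain"] < 0.5
        # If CCC's article is believed, BBB's forecast should not be
        # believed more strongly than AAA's.
        if b["CCC_cut_resources"] > 0.5:
            ok = ok and b["BBB_no_rain"] <= max(b["AAA_rain"], 0.5)
        return ok

    print(constraints_satisfied(beliefs))  # True for the beliefs above

Lowering CCC_cut_resources below 0.5 would activate the first rule and flag the belief of 0.60 in AAA's forecast as a violation, mirroring how the basic and extended versions of the scenario can warrant different evaluations of structurally equivalent graphs.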
Argumentation offers an appealing way of representing and evaluating arguments and counterarguments. This approach can be enhanced by considering probability assignments on arguments, allowing for a quantitative treatment of formal argumentation. In this paper, we regard the assignment as denoting the degree of belief that an agent has in an argument being acceptable. While there are various interpretations of this, one example is its application to a deductive argument: here, the degree of belief that an agent has in the argument being acceptable is a combination of the degrees to which it believes the premises, the claim, and the derivation of the claim from the premises. We consider constraints on these probability assignments, inspired by crisp notions from classical abstract argumentation frameworks, and discuss the issue of probabilistic reasoning with abstract argumentation frameworks. Moreover, we consider the scenario in which assessments of the probabilities of a subset of the arguments are given and the probabilities of the remaining arguments have to be derived, taking both the topology of the argumentation framework and principles of probabilistic reasoning into account. We generalise this scenario by also considering inconsistent assessments, i.e., assessments that contradict the topology of the argumentation framework. Building on approaches to inconsistency measurement, we present a general framework to measure the amount of conflict in these assessments and provide a method for inconsistency-tolerant reasoning.
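As a concrete illustration of one such constraint, the short Python sketch below checks a probability assessment against a coherence-style condition (if A attacks B, then P(B) <= 1 - P(A)) and measures the inconsistency of the assessment as the total amount by which that condition is violated. The specific graph, the numbers, and the summation-based measure are assumptions made here for illustration, not the paper's formal machinery.

    # Toy probabilistic assessment over an abstract argumentation framework.
    # Coherence condition used: if A attacks B, then P(B) <= 1 - P(A).
    # The inconsistency measure (sum of violations) is an illustrative choice.
    attacks = [("a", "b"), ("b", "c")]           # a attacks b, b attacks c
    assessment = {"a": 0.8, "b": 0.6, "c": 0.7}  # possibly partial assessment

    def inconsistency(attacks, p):
        total = 0.0
        for attacker, target in attacks:
            if attacker in p and target in p:
                # Amount by which P(target) exceeds 1 - P(attacker).
                total += max(0.0, p[target] - (1.0 - p[attacker]))
        return total

    print(inconsistency(attacks, assessment))  # 0.4 + 0.3 = 0.7 (up to rounding)

An assessment with inconsistency 0 respects the topology of the framework; a positive value quantifies how far the given probabilities are from doing so, which is the kind of signal an inconsistency-tolerant reasoning method can exploit.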