2019
DOI: 10.1016/j.eswa.2019.03.012
Explanations by arbitrated argumentative dispute

Abstract: Explaining outputs determined algorithmically by machines is one of the most pressing and studied problems in Artificial Intelligence (AI) nowadays, but the equally pressing problem of using AI to explain outputs determined by humans is less studied. In this paper we advance a novel methodology integrating case-based reasoning and computational argumentation from AI to explain outcomes, determined by humans or by machines, indifferently, for cases characterised by discrete (static) features and/or (dynamic) s…
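The abstract describes explaining outcomes for cases characterised by discrete features, by combining case-based reasoning with computational argumentation. A minimal sketch of the case-based side only, where the `Case` type, the overlap-based similarity, and the example data are all illustrative assumptions, not the paper's actual method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    features: frozenset  # discrete (static) features of the case
    outcome: str         # the outcome to be explained


def nearest_past_cases(focus, casebase):
    """Rank past cases by feature overlap with the focus case (assumed similarity)."""
    return sorted(casebase,
                  key=lambda c: len(c.features & focus.features),
                  reverse=True)


def explain(focus, casebase):
    """Explain the focus case via its most similar past case,
    citing the shared features as the supporting reasons."""
    best = nearest_past_cases(focus, casebase)[0]
    shared = best.features & focus.features
    return best.outcome, sorted(shared)


# Hypothetical case base and focus case, for illustration only.
past = [Case(frozenset({"fever", "cough"}), "flu"),
        Case(frozenset({"rash"}), "allergy")]
outcome, reasons = explain(Case(frozenset({"fever", "cough", "fatigue"}), "?"), past)
# outcome == "flu"; reasons == ["cough", "fever"]
```

The paper's methodology additionally structures such comparisons as an argumentative dispute rather than a single nearest-neighbour lookup; this sketch only shows the case representation.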

Cited by 38 publications (46 citation statements)
References 31 publications
“…Argumentation is indeed well-suited for explainable reasoning [6,40,70] with argumentative explanations proposed in various settings, see e.g. [1][2][3]5,14,16,18,[21][22][23][24]26,28,31,32,[37][38][39]42,43,51,57,58,62,64,66,69,70,77,79,81,[84][85][86][87][88]90,93,101,103,104,[110][111][112]. We hope to exploit the well-established as well as novel ABA + mechanisms to our advantage of providing various explanations to accompany the decisions supported by ABA + G. In addition to several other future work directions mentioned in Sections 6 and 7, we will aim…”
Section: Discussion
confidence: 99%
“…There is so far little work on argumentation for model-agnostic explanation of machine-learning algorithms, but recent research suggests the feasibility of an argumentation approach. We are inspired by the work of Čyras et al [20,21], also applied by [18,19]. They define cases as sets of binary features plus a binary outcome (in [20] also a case's 'stages' are considered, but for present purposes these can be ignored). Then they explain the outcome of a 'focus case' in terms of a graph structure that essentially utilises an argument game for the grounded semantics of abstract argumentation [24,32].…”
Section: Introduction
confidence: 99%
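The grounded semantics referred to in the excerpt above can be illustrated by computing the grounded extension of an abstract argumentation framework as the least fixed point of the characteristic function. This is a generic sketch of that standard construction, not code from [20,21]; the example framework is hypothetical:

```python
def grounded_extension(args, attacks):
    """Grounded extension of the abstract argumentation framework
    (args, attacks): the least fixed point of F(S) = {a | S defends a},
    reached by iterating F from the empty set."""
    def defended(S):
        # a is defended by S if every attacker of a is counter-attacked by S
        return {a for a in args
                if all(any((c, b) in attacks for c in S)
                       for b in args if (b, a) in attacks)}
    S = set()
    while True:
        nxt = defended(S)
        if nxt == S:
            return S
        S = nxt


# a attacks b, b attacks c: a is unattacked, and a defends c against b,
# so the grounded extension accepts both a and c.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
grounded = grounded_extension(args, attacks)
# grounded == {"a", "c"}
```

The argument game mentioned in the excerpt is a dialectical proof procedure for membership in this extension: a proponent defends an argument against every attack the opponent can raise.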
“…This term must not be confused with the term explanation which is the answer to a why-question or with the term justification which explains why a result is good, but does not necessarily aim to give an explanation of the process. Despite the numerous (formal and empirical) approaches [12,11,17,9] to tackle the problem of interpretability of artificial intelligent systems, it is still an open research problem. As highlighted by Mittelstadt et al [14], artificial argumentation [3] may play an important role in addressing this open issue, thanks to its inner feature of combining decision making with the pro and con arguments leading to a certain decision.…”
Section: Introduction
confidence: 99%