2021
DOI: 10.1007/978-3-030-81907-1_11

The Explanation Game: A Formal Framework for Interpretable Machine Learning

Cited by 10 publications (7 citation statements)
References 80 publications
“…In emphasizing that the opacity of such models is not per se a threat to their ability to contribute to explanation and understanding, I do not mean to deny that there are often good reasons to seek transparency. In particular, methods of explainable AI might be used in the process of model validation and in using such models for discovery (Watson and Floridi, 2021; Watson, 2022b).…”
Section: Discussion (mentioning)
Confidence: 99%
“…This is the goal of the “explainable AI” (xAI) movement, which has received considerable attention (see e.g. Watson and Floridi, 2021; Watson, 2022a; Zednik and Boelsen, 2022; Zerilli, 2022; Beisbart and Räz, 2022; Räz, 2022a), but has also generated skeptical reactions (e.g. Rudin, 2019).…”
Section: Machine Learning and the End of Theory (mentioning)
Confidence: 99%
“…As opposed to many works on this topic (but similar to Kasirzadeh et al. (2023) and Zenil & Bringsjord (2020), among others), this article stresses the important peculiarities of DNNs within ML methods. The present approach has some similarities with the one adopted in Watson and Floridi (2021). Some of the most important differences are: (a) the present focus is on reliability, which naturally leads to considering global, rather than local, interpretability; (b) for the same reason, the relevance defined in Watson and Floridi (2021) is less applicable and is not considered here; (c) finally, Watson and Floridi (2021) consider a subjective notion of simplicity, while we focus on the hardcore complexity that no human can reduce, regardless of language and individual skills.…”
Section: Introduction (mentioning)
Confidence: 89%
“…Firstly, existing decision modelling and machine learning work on irrational behaviour has not taken into account the corresponding decision processes [12]. While prior research on decision models has simulated the irrational behaviour of individuals, it takes the actual outcome of the irrational behaviour as a starting point for modelling and attempts to propose methods to detect, prevent, and/or alleviate undesired bias [12, 13]. This approach is unable to describe and model the irrational decision-making process of individuals.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Consequently, they gather as much information as possible and magnify the likelihood of some small probabilities in order to make relatively rational decisions. In this case, information with a low or medium level of authority will not be able to guide individual behaviour [12, 13]. Second, unlike other studies on public opinion control in emergencies, this research focuses on a specific type of information in emergencies, namely information on individual decision-making and the corresponding actions.…”
Section: Introduction (mentioning)
Confidence: 99%