2018
DOI: 10.1002/isaf.1422
Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges

Abstract: Recent rapid progress in machine learning (ML), particularly so‐called ‘deep learning’, has led to a resurgence in interest in explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML‐based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explain…


Cited by 118 publications (71 citation statements)
References 33 publications
“…As resource-granting stakeholders seek an understanding of how and why people are being influenced and are affected by these AI-made predictions and the resulting machine behavior or decision making, the stakeholders make an assessment as to whether they are meaningful in the context of the prevalent beliefs, logics, and categories (Suchman, 1995). Considering the "black box" nature of many AI models, which makes it difficult, if not impossible, for humans to understand exactly how machine learning algorithms make predictions and arrive at certain decisions, recommendations, or behaviors (Coglianese & Lehr, 2019), making such predictions explainable is extremely difficult in some cases (Mayenberger, 2019; Preece, 2018). However, only if the explainability of AI-made predictions is achieved can stakeholders assess the meaningfulness of these predictions and renew their trust and commitment to grant the critical resources that help ensure a strong relationship between platform AI capability and perceived user value (Rossi, 2018).…”
Section: Personal Data Use (citation type: mentioning)
confidence: 99%
“…Among the many machine learning techniques presently available, this study focuses on genetic programming for several reasons. First, there is increasing interest in the scientific literature concerning the interpretability of artificial intelligence (AI) models [27]. This interest ranges from the business domain [28] to the health-care sector [29].…”
Section: Challenges Of Electricity Price Forecasting (citation type: mentioning)
confidence: 99%
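The statement above points to why genetic programming appeals for interpretability: rather than an opaque weight matrix, it evolves an explicit symbolic expression that a domain expert can read. A minimal sketch of that idea, assuming the third-party gplearn package and purely illustrative synthetic data (none of these names or settings come from the cited study):

```python
# Minimal sketch (assumes gplearn is installed): genetic programming evolves
# an explicit symbolic expression, so the fitted model itself is readable.
# Data and hyperparameters are illustrative, not from the cited study.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))   # two synthetic input features
y = X[:, 0] ** 2 - X[:, 1]              # hidden ground-truth relation

est = SymbolicRegressor(
    population_size=500,
    generations=20,
    function_set=("add", "sub", "mul"),  # restrict to readable operators
    parsimony_coefficient=0.01,          # penalise bloated expressions
    random_state=0,
)
est.fit(X, y)

# Prints an explicit formula, e.g. sub(mul(X0, X0), X1), which an expert
# can inspect directly -- unlike the weights of a deep network.
print(est._program)
```

Because the output is a formula rather than a parameter tensor, the "explanation" is the model itself, which is the interpretability argument the citing study makes.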
“…Local explanation focusses on justifying a single decision or output, whilst global explanation focusses on overall system behaviour (Hoffman et al. 2018; Weller 2017; Doshi-Velez and Kim 2017; Ribeiro et al. 2016). XAI is often described in terms of transparency and post-hoc explanation (Preece et al. 2018; Mittelstadt et al. 2019; Lipton 2016; Preece 2018). Transparency seeks to reveal information about the internal structure of a model and its training data in order to communicate how the system reaches an output (Tomsett et al. 2018).…”
Section: Introduction (citation type: mentioning)
confidence: 99%
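Since the statement above distinguishes local explanation (justifying one output) from global explanation (characterising overall behaviour), a small sketch may help. It uses a crude perturbation-based attribution on a hypothetical scikit-learn model; this is an illustration of the distinction, not any specific method from the cited papers:

```python
# Crude sketch of the local/global distinction; hypothetical data and
# model, not a method from the cited papers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: a summary of overall system behaviour, here the
# forest's impurity-based feature importances.
print("global importances:", model.feature_importances_)

# Local explanation: justify one prediction by replacing each feature of a
# single instance with its dataset mean and measuring the probability shift.
x = X[:1].copy()
base = model.predict_proba(x)[0, 1]
for j in range(X.shape[1]):
    x_pert = x.copy()
    x_pert[0, j] = X[:, j].mean()
    delta = base - model.predict_proba(x_pert)[0, 1]
    print(f"feature {j}: local contribution ~ {delta:+.3f}")
```

The importances summarise the model as a whole, while the per-feature deltas justify only the single prediction for x, mirroring the local/global split the passage describes.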