2019
DOI: 10.1016/j.artint.2018.07.007

Explanation in artificial intelligence: Insights from the social sciences

Abstract: There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence…

Cited by 3,260 publications (2,724 citation statements)
References 143 publications (295 reference statements)

“…First we provide some definitions to explain what kind of explainability we mean—this will lead us to the term “Causability” in contrast to the well‐known term “Causality”; then we discuss briefly the state‐of‐the‐art of some current explainable models, and continue with an example and a medical use‐case from histopathology. We conclude with pointing to the urgent need of a systems causability scale to measure the quality of an explanation (Hoffman, Mueller, Klein, & Litman), which must also include social aspects of human communication (Miller).…”
Section: Introduction
confidence: 99%
“…A major shortcoming of ML methods in general, and of DL methods in particular, is that the learned relations are hidden under very complicated prediction functions. However, recent years have seen the emergence of a whole field of ML called 'Explainable Artificial Intelligence' to face this issue (Miller 2019), and techniques and methodologies have been introduced to study what the ML/DL models are learning (Montavon et al 2018). In this work, we focus on an efficient method to explain the model predictions called regression activation mapping (RAM) (Zhou et al 2016, Wang and Yang 2017).…”
Section: Introduction
confidence: 99%
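The excerpt above names regression activation mapping (RAM), a regression analogue of class activation mapping, as the method used to explain model predictions. As a rough sketch only, and not the cited papers' implementation, the snippet below illustrates the core idea under the assumption of a CNN that ends in global average pooling followed by a single linear regression output; the function name, array shapes, and random inputs are hypothetical placeholders.

import numpy as np

def regression_activation_map(feature_maps, head_weights):
    # feature_maps: (C, H, W) activations from the last convolutional layer.
    # head_weights: (C,) weights of the final linear regression output.
    # Returns an (H, W) map of how strongly each location drives the prediction.
    ram = np.tensordot(head_weights, feature_maps, axes=([0], [0]))
    ram -= ram.min()        # shift to non-negative values
    if ram.max() > 0:
        ram /= ram.max()    # normalise to [0, 1] for visualisation
    return ram

# Hypothetical stand-ins for a trained network's outputs:
fmap = np.random.rand(64, 14, 14)   # last-conv feature maps (C, H, W)
w = np.random.randn(64)             # regression-head weights (C,)
heatmap = regression_activation_map(fmap, w)
print(heatmap.shape)                # -> (14, 14)

In CAM-style methods this channel-weighted sum is typically upsampled to the input resolution and overlaid on the image as a heatmap.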
“…AI explanation is necessary to ensure trust (Lyons; Theodorou, Wortham, & Bryson) and is required by EU data protection laws. Explanation should be grounded in moral and social concepts, including values, social norms and relationships, commitments, habits, motives and goals (Miller).…”
Section: Ethical Pas: Design For Values
confidence: 99%