2018
DOI: 10.48550/arXiv.1806.00069
Preprint

Explaining Explanations: An Overview of Interpretability of Machine Learning

Cited by 55 publications (89 citation statements)
References 0 publications

“…Measuring interpretation desiderata. Currently, there is no clear consensus in the community around how to evaluate interpretation methods, although some recent work has begun to address it (11)(12)(13). As a result, the standard of evaluation varies considerably across different work, making it challenging both for researchers in the field to measure progress, and for prospective users to select suitable methods.…”
Section: Future Work
confidence: 99%
“…One line of work focuses on providing an overview of different interpretation methods with a strong emphasis on post hoc interpretations of deep learning models (7,8), sometimes pointing out similarities between various methods (9,10). Other work has focused on the narrower problem of how interpretations should be evaluated (11,12) and what properties they should satisfy (13). These previous works touch on different subsets of interpretability, but do not address interpretable machine learning as a whole, and give limited guidance on how interpretability can actually be used in data-science life cycles.…”
Section: Introduction
confidence: 99%
“…Several researchers have proposed comprehensive definitions of explanations [22,23,7,24] and have presented explanation components that they deem necessary to satisfy either their work or the domains where they hope the explanations will be useful. However, with a shift of focus in AI we feel the need to revisit the work on defining explanation as we consider what is desirable in next-generation "explainable knowledge-enabled systems."…”
Section: Terminology
confidence: 99%
“…To begin to address the need of building explainable, knowledge-enabled AI systems, we present a list of desirable properties from the synthesis of our literature review of past explanation work. Our review primarily spans knowledge representation in expert systems [22], provenance and reasoning efforts in the Semantic Web [18], user task-processing workflows in cognitive assistants [7,35], and efforts to reduce unintelligibility in the ML domain [9,24,21]. Additionally, we analyzed explanation requirements from current literature, answering an increased need for user-comprehensibility [36], accountability [32] and user-focus [19].…”
Section: Explainable Knowledge-enabled Systems
confidence: 99%