2020
DOI: 10.1145/3392878
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

Abstract: As the use of machine learning (ML) models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their models work. While scholarly interest in model interpretability has grown rapidly in research communities such as HCI and ML, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack…


Cited by 138 publications (66 citation statements)
References 39 publications
“…They argue that explainability helps designers enhance correctness, identify improvements in training data, account for changing realities, support users in taking control, and increase user acceptance. An interview study with 22 machine learning professionals documented the value of explainability for developers, testers, managers, and users [52]. However, explainability methods are only slowly finding their way into widely used applications and possibly in ways that are different from the research.…”
Section: Explainable User Interfaces
confidence: 99%
“…As machine learning is becoming an integral part of our lives, researchers have been investigating the biases that could arise and how to mitigate them [5,44]. Work on bias mitigation looked into the interpretability and transparency of these models [27,59,60], what industry practitioners need to improve the fairness in ML systems [29,30], and the perceived fairness of biased algorithms in current practices [26,51]. In our work, we looked into how much bias and fairness in algorithms is communicated as an aspect of ML models' quality and who within teams and organizations is interested in this.…”
Section: Algorithmic Bias
confidence: 99%
“…Two examples include "the degree to which an observer can understand the cause of a decision" [22] and "a method is interpretable if a user can correctly and efficiently predict the method's result" [23]. Furthermore, different roles also have different interpretations [24]. Rather than delimiting what interpretability is, we propose to reach an agreement on its meaning for all project stakeholders.…”
Section: Algorithm Design
confidence: 99%