2018
DOI: 10.1007/978-3-319-90403-0_1
2D Transparency Space—Bring Domain Users and Machine Learning Experts Together

Cited by 14 publications (9 citation statements). References 28 publications.
“…However, much of the existing research focused on explaining the underlying algorithm [69]; and designers may assume that users have advanced knowledge of machine learning [69], or users have to understand the decision process (e.g., [59,67]). Besides machine learning experts, many other end users can also benefit from machine learning [41,129], and sometimes human-machine collaborations exhibit better performance than the human or the machine alone [4,30,115]. Yet, end users could be domain experts who may not have the background to understand how the algorithm operates.…”
Section: Introduction
confidence: 99%
“…Visibility is operationalized, for example as interacting with AI/ML processes (via a user interface), providing visual analytics, and displaying (e.g., mapping) probabilities or clusters (Sacha et al, 2017). Explainability refers to the interpretability of AI/ML to developers and end users, for instance highlighting which parts of a feature influence its classification or why inferences are made, based, for example, on keywords (Zhou & Chen, 2018b). In a human-centred deployment of AI/ML, we must explain all steps even when some have obvious reasoning and show high interpretability (e.g., translating French to English).…”
Section: Employing NLP and HCML to Research the Smart City
confidence: 99%
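The keyword-based explanation style described in the excerpt above (showing why an inference was made, based on keywords) can be sketched as a linear bag-of-words scorer that reports each token's contribution to the decision. All weights, names, and thresholds below are hypothetical, purely for illustration:

```python
# Minimal sketch of keyword-based explainability: a linear scorer
# that returns not only a label but also the per-keyword contributions
# that produced it, so an end user can see *why* the text was classified.

WEIGHTS = {"risk": 1.5, "safe": -2.0, "urgent": 1.0}  # assumed, illustrative

def classify_with_explanation(text, threshold=0.0):
    tokens = text.lower().split()
    # Each matched keyword contributes its weight to the overall score.
    contributions = {t: WEIGHTS[t] for t in tokens if t in WEIGHTS}
    score = sum(contributions.values())
    label = "positive" if score > threshold else "negative"
    # The contributions dict *is* the explanation: which keywords drove
    # the score, and by how much.
    return label, score, contributions

label, score, why = classify_with_explanation("urgent risk detected")
```

Here `why` maps each influential keyword to its weight, which is the kind of per-feature highlighting the excerpt attributes to explainable classifiers.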
“…aspx?id=32592). Transparency refers to explicating how algorithms act on the data, generate data, how algorithms deal with uncertainty, and how to connect the end users with the thought processes of the algorithm developers (Zhou & Chen, 2018b).…”
Section: Employing NLP and HCML to Research the Smart City
confidence: 99%
“…As ML models can provide incorrect predictions [CLG * 15], ML experts have to check whether the model functions properly [TKC17]. Also, domain experts in critical fields need to understand how a specific prediction has been reached in order to trust in ML [ZC18]. For example, in medicine, a physician might not rely on a model without explanations of how and why it forms a prediction, since patient lives are at risk [RSG16,HTF01,LBL16].…”
Section: Introductionmentioning
confidence: 99%