Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents 2019
DOI: 10.1145/3308532.3329441

"Do you trust me?"

Abstract: While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development has led to an ongoing resurgence of the research area of explainable artificial intelligence (XAI), which aims to reduce the opaqueness of such black-box models. However, much of the current XAI research is focused on machine learning practitioners and engineers while omitting the sp…

Cited by 86 publications (4 citation statements); references 7 publications.

Citation statements (ordered by relevance):
“…Transparency is known to be a significant factor in trusting automated data science systems. However, much of the literature states a lack of transparency in AI [5,64,65]. We believe exploring the information needs and individual differences of data scientists can inform us of possible ways to increase trust in AutoML tools.…”
Section: Results (confidence: 99%)
“…While existing methods discuss user trust as a broad goal, to the best of our knowledge, their ability to help regulators or decision-makers adequately trust ML models is not demonstrated through well-defined evaluations or user trials. The experimental work on the notion of trust has relied on subjective, self-reported measures of trust in performing a simplified task (Ribeiro et al, 2016; Weitz et al, 2019; Buçinca et al, 2020). However, Jacovi et al (2021) in their effort of formalizing the notion of trust in ML, argue that simply asking the user whether they trust the model for a simple task does not evaluate the notion of trust in AI, as the users are not assuming any risk, and they argue that relying on an AI with assumed risk is a prerequisite for trust.…”
Section: Current State Of Explainable ML (confidence: 99%)
“…"... systems that can explain their rationale to a human user, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future" [6] Anthropomorphising technology in this way has implications for technology acceptance (see [7]; and Section 4.3 below). Surveys by Adadi and Berrada [2] and Arrieta and his colleagues [5] focus primarily on mapping out the domain from recent research.…”
Section: Responsible and Explainable AI (confidence: 99%)
“…The DARPA program seeks to support "explanation-informed acceptance" via an understanding of the socio-cognitive context of explanation [6]. Picking up on such a human-mediated approach, Weitz and her colleagues demonstrate how even simple methods, in their case the use of an avatar-like component, encourage and enhance perceptions of understanding the technology [7]. Taking this further and echoing [9] on trust, Israelsen and Ahmed, meanwhile, focus on trust-enhancing "algorithmic assurances" which echo traditional constructs like trustworthiness indicators in the trust literature (see Section 4.4) [11].…”
Section: Responsible and Explainable AI (confidence: 99%)