Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction 2017
DOI: 10.1145/3029798.3034809
Natural Language Explanations in Human-Collaborative Systems

Cited by 5 publications (5 citation statements) | References 9 publications
“…Similarly, participants seemed to reliably agree with the claim 'If I do not know that an AI's internal algorithm is working properly, but that AI is able to explain why it makes decisions, I trust that AI' (average response on the Likert scale was 3.25). Thus, as far as the (admittedly limited and preliminary) evidence collected through our survey suggests, for the purpose of generating and sustaining trust in AI-mediated recommendations, these systems would benefit from the ability to explain why they made their decisions, similarly to how a human being would (Kunkel et al., 2019; Scalise et al., 2017). So, what kind of explanations do humans expect from AI?…”

Section: Functional Explanations and Reasons
Confidence: 97%
“…Explanations are a core component of human interaction (Scalise et al., 2017; Krening et al., 2017; Madumal et al., 2019). In the context of Machine Learning (ML), explanations should articulate the decision-making process of an ML model explicitly, in a language familiar to people as communicators (De Graaf and Malle, 2017; Miller, 2018).…”

Section: Related Work
Confidence: 99%
“…Detailed technical logs of a robot's experience were originally available only to trained researchers (Landsiedel et al., 2017; Scalise et al., 2017). Recent work, however, has generated natural language descriptions of a robot's travelled path from them.…”

Section: Related Work
Confidence: 99%