Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
DOI: 10.1145/2909824.3020233
Improving Robot Controller Transparency Through Autonomous Policy Explanation

Abstract: Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot's control code, and particularly when statistical methods are used to encode control policies, there is no guarantee that meaningful insights into a robot's behavior can be derived or th…

Cited by 178 publications (197 citation statements)
References 48 publications
“…Some of the answers to these questions lie more in the realm of cognitive science. Still, natural language and other communication modalities could be used [58]. Specifically, Olah et al. [104] investigate how predictions of NNs can be explained through visualizing how different parts of the network respond to certain images.…”
Section: Common
confidence: 99%
“…This resurgence is driven by evidence that many AI applications see limited uptake, or are not adopted at all, due to ethical concerns [2] and a lack of trust on the part of their users [166,101]. The running hypothesis is that by building more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust intelligent agents [129,25,65].…”
Section: Introduction
confidence: 99%
“…This is usually a fundamental step toward selecting the optimal action in both cooperative and competitive settings. Another application is behavioral modeling of human and artificial agents, which is required both for successful human-AI collaboration and for transparent (explainable) Artificial Intelligence (AI) [17,18,19,20,21].…”
Section: Related Work
confidence: 99%