2021
DOI: 10.48550/arxiv.2101.01625
Preprint

Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery

Devleena Das,
Siddhartha Banerjee,
Sonia Chernova

Abstract: With the growing capabilities of intelligent systems, the integration of robots into our everyday lives is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce …

Cited by 3 publications (3 citation statements)
References 37 publications
“…We implemented a proof-of-concept using tabular Q-learning, but we want to test our approach with more complex learning algorithms and policies outside RL. Finally, our approach can complement work on failure explainability [43], e.g., to use OAs for explaining failures to users.…”
Section: B. Broader Applicability of Our Approach (mentioning)
confidence: 99%
“…and by presenting the language connected to this in advance. Ehsan et al. [69] and Das et al. [72] proposed a framework for directly generating linguistic explanations from agent state sequences using an encoder-decoder model.…”
Section: Verbalization and Visualization of Explanations (mentioning)
confidence: 99%
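The statement above only names the encoder-decoder approach rather than specifying it. As a loose, illustrative sketch (not the cited authors' actual model), a minimal encoder-decoder that maps agent state sequences to explanation tokens might look like the following in PyTorch; every class name, dimension, and the choice of GRU layers here is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class StateSequenceEncoder(nn.Module):
    """Encodes a sequence of agent state vectors into a single context vector.
    (Hypothetical sketch; not the architecture from the cited papers.)"""
    def __init__(self, state_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden_dim, batch_first=True)

    def forward(self, states):      # states: (batch, seq_len, state_dim)
        _, h = self.rnn(states)     # h: (1, batch, hidden_dim)
        return h                    # final hidden state summarizes the trajectory

class ExplanationDecoder(nn.Module):
    """Generates explanation-token logits conditioned on the encoded trajectory."""
    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, h0):  # tokens: (batch, tgt_len); h0 from the encoder
        emb = self.embed(tokens)
        out, _ = self.rnn(emb, h0)
        return self.out(out)        # (batch, tgt_len, vocab_size) logits

# Toy usage with random data; all sizes are illustrative only.
encoder = StateSequenceEncoder(state_dim=16, hidden_dim=64)
decoder = ExplanationDecoder(vocab_size=100, embed_dim=32, hidden_dim=64)

states = torch.randn(2, 10, 16)         # 2 trajectories, 10 steps each
tokens = torch.randint(0, 100, (2, 7))  # 7-token explanation prefixes
logits = decoder(tokens, encoder(states))
print(logits.shape)                      # torch.Size([2, 7, 100])
```

In a sketch like this, the decoder would be trained with teacher forcing against human-written explanation sentences and decoded greedily or with beam search at test time.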
“…Other XAIP user studies use one domain (Chakraborti et al. 2019b; Chakraborti and Kambhampati 2019; Sreedharan et al. 2019a; Lindsay et al. 2020; Das, Banerjee, and Chernova 2021), two domains (Sreedharan et al. 2019b; Sreedharan, Srivastava, and Kambhampati 2020), or four domains (Krarup et al. 2021).…”
(mentioning)
confidence: 99%