2020
DOI: 10.48550/arxiv.2007.09028
Preprint

Sequential Explanations with Mental Model-Based Policies

Abstract: The act of explaining across two parties is a feedback loop, where one provides information on what needs to be explained and the other provides an explanation relevant to this information. We apply a reinforcement learning framework which emulates this format by providing explanations based on the explainee's current mental model. We conduct novel online human experiments where explanations generated by various explanation methods are selected and presented to participants, using policies which observe partic…
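As a rough illustration of the loop described in the abstract, the sketch below (not the authors' implementation) has a simple policy observe a coarse estimate of the explainee's mental model and choose which explanation method to present next, updating its values from the explainee's feedback. The explanation methods, the discretized mental-model state, and the reward signal are all assumptions made for the sake of the example.

```python
# A minimal sketch, assuming a discretized mental-model state, three candidate
# explanation methods, and a scalar "understanding" reward; none of these are
# taken from the paper itself.
import random
from typing import List

EXPLANATION_METHODS: List[str] = ["saliency_map", "counterfactual", "example_based"]


class TabularPolicy:
    """Epsilon-greedy values over (mental-model state, explanation method) pairs."""

    def __init__(self, n_states: int, epsilon: float = 0.1, lr: float = 0.1):
        self.q = [[0.0] * len(EXPLANATION_METHODS) for _ in range(n_states)]
        self.epsilon = epsilon
        self.lr = lr

    def select(self, state: int) -> int:
        # Explore occasionally; otherwise present the explanation method
        # currently believed best for this mental-model state.
        if random.random() < self.epsilon:
            return random.randrange(len(EXPLANATION_METHODS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state: int, action: int, reward: float) -> None:
        # Bandit-style value update from the explainee's feedback after
        # seeing the chosen explanation.
        self.q[state][action] += self.lr * (reward - self.q[state][action])


def explanation_session(policy: TabularPolicy, n_rounds: int = 5) -> None:
    state = 0  # hypothetical coarse estimate of the explainee's mental model
    for _ in range(n_rounds):
        action = policy.select(state)
        # In the actual experiments a participant would see the chosen
        # explanation; here a random number stands in for the measured
        # improvement in their understanding.
        reward = random.random()
        policy.update(state, action, reward)
        state = min(state + 1, 2)  # assume understanding improves coarsely


if __name__ == "__main__":
    explanation_session(TabularPolicy(n_states=3))
```

A bandit-style update is used here only to keep the sketch short; the paper frames the exchange as a sequential decision process, so a full treatment would propagate value across rounds of the explanation dialogue.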

Cited by 3 publications (3 citation statements)
References 23 publications
“…In parallel, significant research in human-AI partnership has considered how to directly optimize the team performance in AI-assisted decision-making Lai et al (2020); Lai and Tan (2019); Bansal et al (2020, 2019b); Green and Chen (2019); Okamura and Yamada (2020). The perception of the human agents plays an important role in collaborative decision-making (Yeung et al, 2020; Bansal et al, 2019a; Lee, 2018). These works experiment with several heuristic-driven explanation strategies that only partially take into account the characteristics of the human at the end of the decision-making pipeline.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, in interpersonal explanations, it is desirable to change not only the information to be explained but also the manner in which the explanation is presented according to the user. Therefore, Yeung et al [70] proposed a method to select the best available explanation presentation method by incorporating the process of explanation presentation and user understanding into a reinforcement learning framework. In order to apply the method to real-world problems, further discussion on reward design and training data acquisition methods would be beneficial.…”
Section: Verbalization and Visualization of Explanations (mentioning)
confidence: 99%
“…Hilgard et al (2019) learn to visualize high-dimensional examples to assist users with one-step classification tasks, whereas we focus on sequential decision-making and make minimal assumptions about the desired task. Yeung et al (2020) use a human-in-the-loop reinforcement learning method to train an agent to sequentially explain black-box model predictions to a human auditor, where the agent is rewarded for causing the user's mental model of the predictive model to match the actual predictive model. Our work differs in that it focuses on improving users' situational awareness in control tasks with partial observations, rather than improving model interpretability.…”
Section: Assistive State Estimation (mentioning)
confidence: 99%
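The last citation statement above describes the agent being rewarded when the user's mental model of the black-box model comes to match the model itself. One hedged way to picture such a reward, with function and variable names that are illustrative assumptions rather than the paper's, is the agreement rate between the user's guesses of the model's outputs and the model's actual outputs on a set of probe inputs.

```python
# A hedged sketch of a mental-model-matching reward: the explaining agent is
# rewarded in proportion to how often the user can predict the black-box
# model's output on probe inputs. All names here are illustrative, not the
# paper's API.
from typing import Any, Callable, Sequence


def mental_model_reward(
    user_guess: Callable[[Any], int],
    model_predict: Callable[[Any], int],
    probe_inputs: Sequence[Any],
) -> float:
    """Fraction of probe inputs on which the user's guess matches the model."""
    if not probe_inputs:
        return 0.0
    matches = sum(1 for x in probe_inputs if user_guess(x) == model_predict(x))
    return matches / len(probe_inputs)


# Example: a user who has fully internalized a simple thresholding model.
reward = mental_model_reward(
    user_guess=lambda x: int(x > 0.5),
    model_predict=lambda x: int(x > 0.5),
    probe_inputs=[0.1, 0.4, 0.7, 0.9],
)
assert reward == 1.0
```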