2020
DOI: 10.1007/978-3-030-51924-7_5

Towards the Role of Theory of Mind in Explanation

Abstract: Theory of Mind is commonly defined as the ability to attribute mental states (e.g., beliefs, goals) to oneself, and to others. A large body of previous work, from the social sciences to artificial intelligence, has observed that Theory of Mind capabilities are central to providing an explanation to another agent or when explaining that agent's behaviour. In this paper, we build and expand upon previous work by providing an account of explanation in terms of the beliefs of agents and the mechanism by which agents…

Cited by 12 publications (10 citation statements)
References 45 publications
“…A recent research direction that is closely related to the proposed notion of explanation is that by Shvo, Klassen, and McIlraith (2020), where they propose a general belief-based framework for generating explanations that employs epistemic state theory to capture the models of the explainer (agent in this paper) and the explainee (human user in this paper), and incorporates a belief revision operator to assimilate explanations into the explainee's epistemic states. A main difference with our proposed framework is that our framework restricts knowledge to be stored in logical formulae, while theirs considers epistemic states that can characterize different types of problems and have no such restriction.…”
Section: Some Further Discussion
confidence: 99%
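The statement above describes a belief-based view of explanation: the explainer communicates an explanation, and the explainee assimilates it into their epistemic state via a belief revision operator. A minimal sketch of that idea, using a naive revision operator over sets of propositional literals (all names and the revision policy here are illustrative, not taken from the cited framework):

```python
# Illustrative sketch: an explainee's epistemic state as a set of
# propositional literals ('p' or its negation '~p'), and a naive belief
# revision operator that assimilates an explanation by giving its
# literals priority over any contradicting prior beliefs.

def negate(literal: str) -> str:
    """Return the negation of a propositional literal ('p' <-> '~p')."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs: set[str], explanation: set[str]) -> set[str]:
    """Drop beliefs contradicted by the explanation, then add the
    explanation's literals (the incoming explanation takes priority)."""
    contradicted = {negate(lit) for lit in explanation}
    return (beliefs - contradicted) | explanation

# Hypothetical explainee state and explanation from the explainer:
explainee = {"door_locked", "~has_key"}
explanation = {"has_key", "agent_opened_door"}

revised = revise(explainee, explanation)
# 'has_key' replaces '~has_key'; unrelated beliefs are retained
```

Real frameworks of this kind work over richer epistemic states and logically closed belief sets; this sketch only shows the assimilation step that the quoted passage refers to.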
“…In either case, we can predict that it will actively and intentionally seek behavior that aligns with its desires (Bennett and Maruyama, 2021). In this way, consistency in behaviors helps develop an accurate theory of mind (Shvo et al, 2020), and enables us to interact better and to establish efficient and effective communication (Rabinowitz et al, 2018), all of which are crucial for collaboration with human teammates (Shergadwala and El-Nasr, 2021).…”
Section: Transparency and Trust in Human-Agent Interaction Through AToM
confidence: 99%
“…As described, these are based on internal models of contextually-relevant information and of prior interactions. ASI can use this to develop tailored interactions that make their intentions clear to users (Shvo et al, 2020). Agents with well-developed AToM are better able to communicate information that allows the user to understand why it took a certain action over another.…”
Section: Transparency and Trust in Human-Agent Interaction Through AToM
confidence: 99%
“…Much research has been done on how it is possible to attribute beliefs and desires [4,5] in AI-based systems. However, we argue that the attribution of intentions, esp.…”
Section: Introduction
confidence: 99%