The Neural Basis of Mentalizing 2021
DOI: 10.1007/978-3-030-51890-5_15
Computational Models of Mentalizing

Abstract: Humans have a remarkable ability to infer and represent others' mental states such as their beliefs, goals, desires, intentions, and feelings. In this chapter, we review some of the innovations that have developed in economics, computer science, and cognitive neuroscience in modeling the computations underlying several mentalizing operations. Broadly, this involves building models of how agents infer the mental states of other agents within constrained environments. These models include modules for: representi…

Cited by 9 publications (13 citation statements)
References 91 publications
“…We also focus more on preference ToM (algorithms which focus on inferring preferences, rather than more generally on inferring mental states), as there has arguably been more recent work on preference-ToM than on belief-ToM (e.g., in the fields of Inverse Reinforcement Learning—IRL—and Preference Learning), and it has been shown that ToM can indeed be cast as an IRL problem (Jara-Ettinger, 2019). For broader surveys with less of a focus on very recent IRL and preference learning algorithms, see Rusch et al. (2020) and Gonzalez and Chang (2021). We also focus on first-order ToM models, since first-order ToM is already challenging enough for current AI models.…”
Section: Computational and Preference ToM in Artificial Intelligence
confidence: 99%
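The statement above notes that theory of mind can be cast as an inverse reinforcement learning problem: rather than predicting an agent's actions from known preferences, the observer inverts a model of rational choice to infer preferences from observed actions. A minimal sketch of this idea, in the Bayesian inverse-planning style, is shown below; the scenario, names, and parameter values are illustrative assumptions of ours, not taken from the cited works.

```python
import math

# Illustrative sketch of preference inference as inverse planning.
# An observer watches an agent repeatedly choose between two snacks
# and infers which one the agent prefers, assuming the agent is
# Boltzmann-rational (softmax over utilities).

def boltzmann(utilities, beta=2.0):
    """P(choice) proportional to exp(beta * utility)."""
    exps = [math.exp(beta * u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Hypotheses about the agent's preference: utilities of (apple, cookie).
# These utility vectors are assumed for the example.
hypotheses = {
    "prefers_apple":  (1.0, 0.0),
    "prefers_cookie": (0.0, 1.0),
}

prior = {h: 0.5 for h in hypotheses}          # uniform prior
observed_choices = [1, 1, 0, 1]               # 0 = apple, 1 = cookie

# Bayesian update: posterior(h) proportional to prior(h) * product of
# P(choice | h) over the observed choices.
posterior = dict(prior)
for choice in observed_choices:
    for h, utils in hypotheses.items():
        posterior[h] *= boltzmann(utils)[choice]
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

print(posterior)  # posterior mass concentrates on "prefers_cookie"
```

The softmax "rationality" parameter beta controls how noisy the observer assumes the agent to be: higher beta makes each observed choice more diagnostic, while beta near zero makes choices uninformative.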
“…We also focus on first-order ToM models, since first-order ToM is already challenging enough for current AI models. See Gonzalez and Chang (2021) for a discussion about computational models of higher-order ToM and higher-order ToM in humans and Arslan et al. (2017b) for an example of a model of children's development of second-order ToM. Some examples of recent proposals of recursive reasoning models, relevant for higher-order ToM (in a Reinforcement Learning framework), include Wen et al. (2019) and Moreno et al. (2021).…”
Section: Computational and Preference ToM in Artificial Intelligence
confidence: 99%
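The recursive reasoning models mentioned above share a common structure: a higher-order agent acts on a model of a lower-order agent, which may in turn model an even lower-order one. A minimal level-k sketch of this recursion is given below; the hide-and-seek scenario and all names are our own illustrative assumptions, not the specific models proposed in the cited papers.

```python
# Illustrative level-k recursion, the kind of nested modeling relevant
# to higher-order ToM. In a two-player hide-and-seek game, a level-k
# seeker best-responds to a model of a level-(k-1) hider, and vice versa.

SPOTS = ["tree", "shed", "bush"]

def level0_policy():
    """Level-0: a fixed, non-strategic habit with no model of the other
    player (probabilities assumed for the example)."""
    return {"tree": 0.5, "shed": 0.3, "bush": 0.2}

def best_response(opponent_policy, seeking):
    """Deterministic best response to a belief about the opponent:
    the seeker checks the opponent's most likely spot; the hider picks
    the spot the seeker is least likely to check."""
    pick = max if seeking else min
    spot = pick(opponent_policy, key=opponent_policy.get)
    return {s: (1.0 if s == spot else 0.0) for s in SPOTS}

def level_k_policy(k, seeking):
    """A level-k player models the opponent as a level-(k-1) player."""
    if k == 0:
        return level0_policy()
    opponent = level_k_policy(k - 1, not seeking)
    return best_response(opponent, seeking)

print(level_k_policy(1, True))   # level-1 seeker checks the habitual spot
print(level_k_policy(2, False))  # level-2 hider avoids that spot
```

Each additional level of recursion corresponds to one more order of mentalizing ("I think that you think that I think…"), which is why first-order models are usually the starting point.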