2020
DOI: 10.31234/osf.io/rcq25
Preprint

Humans depart from optimal computational models of socially interactive decision-making under partial information

Abstract: Decision making under uncertainty and under incomplete evidence in multiagent settings is of increasing interest in decision science, assistive robotics, and machine assisted cognition. The degree to which human agents depart from computationally optimal solutions in socially interactive settings is generally unknown. Yet, this knowledge is critical for advances in these areas. Such understanding also provides insight into how competition and cooperation affect human interaction and the underlying contribution…

Cited by 1 publication (1 citation statement)
References 26 publications
“…Applying formal learning frameworks to complicated social behaviors, including how we learn to trust or when we empathize, has been beneficial for gaining mathematical tractability in a fuzzy and complex psychological space (Cushman & Gershman, 2019). Indeed, the most striking examples of how formal learning models map onto sophisticated moral beliefs occur by carefully crafting learning algorithms and applying them to moral content (FeldmanHall & Nassar, 2021; Griffiths et al., 2010; Park et al., 2019; Steixner-Kumar et al., 2020; van Baar et al., 2022), ideally in ways that shape real-world issues, such as political polarization (Rathje et al., 2021). The success of such models in capturing learning as it relates to moral phenomena highlights the need for researchers across disciplines to embrace the role of social learning in shaping moral beliefs and the role of reward learning algorithms in reinforcing increasingly extreme moral judgments.…”
mentioning
confidence: 99%