2019
DOI: 10.2139/ssrn.3485475

The Behavioral Economics of Artificial Intelligence: Lessons from Experiments with Computer Players

Cited by 5 publications (3 citation statements)
References 84 publications (117 reference statements)
“…In [8] it is pointed out that more experimental research is needed to really understand how human strategic decision-making changes when interacting with autonomous agents. Following on this, [9] compiles a review of more than 90 experimental studies that have made use of computerized players. Its main conclusions confirm that human behavior does indeed change when some of the other players are artificial and, furthermore, that behavior shifts toward being more rational (in other words, more selfish), with humans observed to actively try to exploit the artificial players.…”
Section: Related Work
confidence: 99%
“…Even though many different works have advocated the introduction of beneficial AI to promote human prosociality [5][6][7], others have pointed out that humans may be keen to exploit this benevolent AI behavior to their own advantage [8][9][10][11]. Thus, before flooding society with AI applications on the promise that they could solve some of the most pressing societal issues, it is worth asking: what behavioral responses can be expected in the presence of AI partners?…”
Section: Introduction
confidence: 99%
“…On the other hand, an analysis of additional results shows that the effect of autonomy is not so clear, and using these technologies in social or moral contexts could have clearly detrimental effects. For example, recent investigations suggest that people tend to act more selfishly when interacting with a computer player [95] and are more prone to cheating [96]. Manistersky et al. [97] reported that, in a resource allocation game, participants who played through self-designed autonomous agents built those agents to improve their own performance rather than to cooperate, contrasting with the results found by [91].…”
Section: The Ugly: Autonomous Systems and Moral Decision-making
confidence: 99%