2006
DOI: 10.1016/j.geb.2006.03.013
An initial implementation of the Turing tournament to learning in repeated two-person games

Cited by 29 publications (28 citation statements)
References 22 publications
“…The second is the organization of an open choice prediction competition that facilitates the evaluation of a wide class of models. Specifically, Erev et al. organized a simplified version of the competition run by Arifovic, McKelvey, and Pevnitskaya [10]. They ran two large experiments examining different problems drawn randomly from the same space, and challenged other researchers to predict the results of the second study based on evaluation of the results of the first study.…”
Section: Introduction (mentioning)
confidence: 99%
“…But there is a second sampling equilibrium at (0.20, 0.28, 0.52) in which both strictly dominated actions are played with positive probability. It turns out that only the latter equilibrium is stable under the dynamics (2). In fact, the basin of attraction of this equilibrium is the entire unit simplex, excluding the single point (0, 0, 1).…”
Section: Sampling Equilibrium (mentioning)
confidence: 99%
“…Rapoport and Chammah (1965) and Rapoport, Guyer, and Gordon (1976)). Yet standard learning algorithms have limited success in capturing the degree of cooperation in the Prisoner's Dilemma game and the alternation between the two pure-strategy Nash equilibria in the Battle of the Sexes game (Arifovic, McKelvey, and Pevnitskaya (2006)). Along similar lines, Erev and Haruvy (2013) point out that "human agents exhibit higher social intelligence and/or sensitivity than assumed by the basic learning models" (p. 61).…”
Section: The Games (mentioning)
confidence: 99%