2010
DOI: 10.1007/s00182-010-0265-3

Why learning doesn’t add up: equilibrium selection with a composition of learning rules

Abstract: In this paper, we investigate the aggregate behavior of populations of learning agents. We compare the outcomes in homogeneous populations learning in accordance with imitate-the-best dynamics and with replicator dynamics to outcomes in populations that mix these two learning rules. New outcomes can emerge. In certain games, a linear combination of the two rules almost always attains an equilibrium that homogeneous learners almost never locate. Moreover, even when almost all weight is placed on one learning rule…
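As a rough illustration of the setup described in the abstract, the sketch below is hypothetical Python, not the authors' model: the stag-hunt-style payoffs, population size, mixing share, and revision timing are all assumptions. It mixes two revision protocols in one population, a fraction of agents revising by imitate-the-best and the rest by a replicator-style imitation-of-success rule.

import random

# Hypothetical illustration: one population playing a symmetric 2x2
# coordination game, with two learning rules mixed in fixed proportions.
PAYOFF = {("A", "A"): 4, ("A", "B"): 0,   # payoff to the row action
          ("B", "A"): 3, ("B", "B"): 2}   # a stag-hunt-like game (assumed)

def avg_payoff(action, pop):
    """Expected payoff of `action` against the current population mix."""
    return sum(PAYOFF[(action, other)] for other in pop) / len(pop)

def step(pop, share_imitate_best=0.5):
    """One revision round: each agent uses one of the two rules."""
    payoffs = {a: avg_payoff(a, pop) for a in ("A", "B")}
    best = max(payoffs, key=payoffs.get)
    new_pop = []
    for action in pop:
        if random.random() < share_imitate_best:
            # Imitate-the-best: switch to the currently best-paying action.
            new_pop.append(best)
        else:
            # Replicator-style rule: imitate a randomly sampled agent with
            # probability proportional to that agent's payoff advantage.
            model = random.choice(pop)
            gain = payoffs[model] - payoffs[action]
            if gain > 0 and random.random() < gain / 4.0:  # 4.0 = max payoff gap
                new_pop.append(model)
            else:
                new_pop.append(action)
    return new_pop

pop = [random.choice(("A", "B")) for _ in range(200)]
for _ in range(50):
    pop = step(pop)
print("share playing A after 50 rounds:", pop.count("A") / len(pop))

With share_imitate_best = 1 the whole population jumps to the currently best-paying action in a single round, while share_imitate_best = 0 gives a noisier replicator-style flow; mixing the two is the kind of composed dynamic the abstract refers to.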

Cited by 7 publications (8 citation statements)
References 30 publications

Citation statements:
“…where m is the number of cells of the network. By hypothesis, Inequality (14) holds. Therefore m ≥ 3, which implies m − 1 ≥ √m. We deduce that:…”
Section: Proof of the Results on Network with Similar Cells (mentioning)
confidence: 81%
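The step from m ≥ 3 to m − 1 ≥ √m quoted above is a one-line square comparison; a worked version in the excerpt's notation, with m the integer number of cells:

\[
(m-1)^2 - m \;=\; m(m-3) + 1 \;\ge\; 1 \qquad \text{for } m \ge 3,
\]

so \((m-1)^2 \ge m\) and, since \(m - 1 > 0\), taking square roots gives \(m - 1 \ge \sqrt{m}\).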
“…, m} (the cells), and whose edges ∆_{i,j} (the interactions) are directed and weighted. The hypothesis of cooperativity among the individuals, namely ∆_{i,j} ≥ 0 for all i ≠ j, makes the game work under an imitate-the-best strategy, which produces players that adopt a myopic behaviour ([14]). In fact, by hypothesis, each cell or player i knows only its own actions towards the other players j ≠ i, the value of its own satisfaction variable, and the actions it receives from the other cells.…”
Section: The Network as a Cooperative Game That Evolves on Time (mentioning)
confidence: 99%
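A minimal Python sketch of a generic imitate-the-best update on a weighted directed network may help here; the graph, weights, and payoff rule below are illustrative assumptions, and the cited model's exact information structure may differ.

# Hypothetical illustration of imitate-the-best on a weighted directed graph:
# each cell i observes its in-neighbours j (with weight delta[i][j] >= 0 for
# i != j) and myopically copies the best-performing one.

delta = {                      # assumed cooperative weights, delta[i][j] >= 0
    1: {2: 1.0, 3: 0.5},
    2: {1: 0.8},
    3: {1: 0.2, 2: 1.5},
}
action = {1: 0.2, 2: 0.9, 3: 0.5}          # each cell's current action

def payoff(i, action):
    """Assumed satisfaction variable: weighted sum of the actions cell i receives."""
    return sum(w * action[j] for j, w in delta[i].items())

def imitate_the_best_step(action):
    new_action = {}
    for i in delta:
        # Myopic rule: compare own payoff with observed in-neighbours only.
        candidates = [i] + list(delta[i])
        best = max(candidates, key=lambda j: payoff(j, action))
        new_action[i] = action[best]
    return new_action

for _ in range(10):
    action = imitate_the_best_step(action)
print(action)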
“…Let g*(t) := max_{z∈Z} g[z](t) and Z*(t) := argmax_{z∈Z} g[z](t). Then g*: ℝ₊ → ℝ is Lipschitz continuous and, for almost all t ∈ ℝ₊, we have that ġ*(t) = ġ[z](t) for each z ∈ Z*(t). Proof of Lemma 4.2. (i) This is immediate from Lemma 4.1(iii). For (ii), observe that if x_i = 0, then x_i^p = 0, and thus y_i^p = m_p x_i − x_i^p = 0 for all p ∈ P.…”
mentioning
confidence: 89%
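The Lipschitz claim for g* in this excerpt is the standard bound for a maximum of finitely many Lipschitz functions; a sketch under the assumption, not stated in the fragment, that Z is finite and each g[z] is L-Lipschitz:

\[
\lvert g^*(t) - g^*(s) \rvert \;\le\; \max_{z \in Z} \lvert g[z](t) - g[z](s) \rvert \;\le\; L \, \lvert t - s \rvert ,
\]

and Rademacher's theorem then gives differentiability of \(g^*\) at almost every \(t\).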
“…The representative agent is what we can estimate in an experiment. Thus, a representative-agent model enables us to capture aggregate behavior while recognizing underlying heterogeneity. (Footnote 1: Further motivation to consider heterogeneity in a population of quantal responders comes from recent findings that models of heterogeneous learners often cannot be adequately approximated by representative-agent models with common parameter values for all [24,8,6].)…”
Section: Introduction (mentioning)
confidence: 99%