Learning cycles in Bertrand competition with differentiated commodities and competing learning rules
Anufriev, M.; Kopányi, D.; Tuinstra, J.
Citation for published version (APA): Anufriev, M., Kopányi, D., & Tuinstra, J. (2012). Learning cycles in Bertrand competition with differentiated commodities and competing learning rules. (CeNDEF Working Paper). Amsterdam: CeNDEF, University of Amsterdam.
Abstract

This paper stresses the importance of heterogeneity in learning. We introduce competition between different learning rules and demonstrate that, although these rules can coexist, their convergence properties are strongly affected by heterogeneity. We consider a Bertrand oligopoly with differentiated goods. Firms do not have full information about the demand structure and aim to maximize their perceived one-period profit by applying one of two different learning rules: OLS learning and gradient learning. We show analytically that the stability of gradient learning depends on the distribution of learning rules over firms. In particular, as the number of gradient learners increases, gradient learning may become unstable. We study the competition between the learning rules by means of computer simulations and illustrate that this change in stability for gradient learning may lead to cyclical switching between the rules. Stable gradient learning typically gives a higher average profit than OLS learning, making firms switch to gradient learning. This, however, can destabilize gradient learning, which, because of decreasing profits, makes firms switch back to OLS learning. This cycle may repeat itself indefinitely.

JEL classification: C63, C72, D43
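The abstract refers to gradient learning, OLS learning, and profit-based switching between the two rules. The following minimal Python sketch is not the paper's model; it only illustrates how such a simulation could be organized, under an assumed linear differentiated demand system, assumed parameter values (N, a, b, c, mc, lam, and the switching settings), and the simplification that gradient learners know the slope of their own demand.

```python
"""
Illustrative sketch (not the paper's exact model): Bertrand competition with
linear differentiated demand, where each firm uses either gradient learning
or OLS learning and occasionally imitates the rule with the higher recent
average profit. All parameters and functional forms are assumptions.
"""
import numpy as np

rng = np.random.default_rng(0)

# assumed linear demand: q_i = a - b*p_i + c * (mean of rivals' prices)
N, a, b, c, mc = 10, 20.0, 1.0, 0.6, 2.0      # firms, demand parameters, marginal cost
T, lam, memory = 2000, 0.4, 50                # periods, gradient step size, OLS window
switch_every, switch_prob = 25, 0.3           # switching frequency and intensity

def demand(p):
    """Quantity sold by each firm given the price vector p."""
    rival_mean = (p.sum() - p) / (N - 1)
    return np.maximum(a - b * p + c * rival_mean, 0.0)

def profit(p):
    return (p - mc) * demand(p)

# rule[i] == 0 -> gradient learner, 1 -> OLS learner
rule = rng.integers(0, 2, size=N)
prices = rng.uniform(mc, a / b, size=N)
history_p = [prices.copy()]
history_q = [demand(prices)]
avg_profit = np.zeros(2)

for t in range(1, T):
    p_old = history_p[-1]
    new_p = p_old.copy()

    for i in range(N):
        if rule[i] == 0:
            # gradient learning: move the price along d(pi_i)/d(p_i),
            # treating rivals' current prices as fixed
            grad = (a - 2 * b * p_old[i]
                    + c * (p_old.sum() - p_old[i]) / (N - 1) + b * mc)
            new_p[i] = max(p_old[i] + lam * grad, mc)
        else:
            # OLS learning: regress own quantity on own price over a rolling
            # window to get perceived demand q_i = alpha + beta * p_i, then
            # set the price that maximizes perceived profit
            P = np.array([h[i] for h in history_p[-memory:]])
            Q = np.array([h[i] for h in history_q[-memory:]])
            X = np.column_stack([np.ones_like(P), P])
            (alpha, beta), *_ = np.linalg.lstsq(X, Q, rcond=None)
            if beta < -1e-6:                  # use only downward-sloping estimates
                new_p[i] = max((beta * mc - alpha) / (2 * beta), mc)
            new_p[i] += rng.normal(0, 0.05)   # small experimentation noise

    history_p.append(new_p)
    history_q.append(demand(new_p))

    # track realized profit per rule and let some firms imitate the better rule
    pi = profit(new_p)
    for r in (0, 1):
        if (rule == r).any():
            avg_profit[r] = pi[rule == r].mean()
    if t % switch_every == 0:
        better = int(avg_profit[1] > avg_profit[0])
        switchers = rng.random(N) < switch_prob
        rule[switchers] = better

print("final share of gradient learners:", (rule == 0).mean())
print("final prices:", np.round(history_p[-1], 2))
```

Under suitable (assumed) parameters, tracking the share of gradient learners over time in such a sketch is one way to visualize the cyclical switching between rules that the abstract describes.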