We analyze a two-player game of strategic experimentation with two-armed bandits. Each player has to decide in continuous time whether to use a safe arm with a known payoff or a risky arm whose likelihood of delivering payoffs is initially unknown. The quality of the risky arms is perfectly negatively correlated between players. In marked contrast to the case where both risky arms are of the same type, we find that learning will be complete in any Markov perfect equilibrium if the stakes exceed a certain threshold, and that all equilibria are in cutoff strategies. For low stakes, the equilibrium is unique, symmetric, and coincides with the planner's solution. For high stakes, the equilibrium is unique, symmetric, and tantamount to myopic behavior. For intermediate stakes, there is a continuum of equilibria.
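The learning externality created by perfect negative correlation is easiest to see in the Bayesian updating of beliefs. Below is a minimal sketch, assuming the exponential-bandit specification standard in this literature (a good risky arm yields lump-sum payoffs at Poisson rate λ while it is used, a bad arm yields nothing) and that perfect negative correlation means exactly one of the two risky arms is good; the function and parameter names are ours and purely illustrative.

```python
import math

# Minimal sketch of belief updating when the two risky arms are perfectly
# negatively correlated, so exactly one of them is good. p is the probability
# that player 1's risky arm is the good one; player 2's arm is then good with
# probability 1 - p. A good arm produces successes at Poisson rate lam while
# it is used; a bad arm never produces a success.

def update_no_success(p, k1, k2, lam, dt):
    """Posterior that arm 1 is good after a period dt in which player i used
    the risky arm with intensity ki and neither player observed a success."""
    no_succ_if_1_good = math.exp(-lam * k1 * dt)  # arm 1 stayed silent although good
    no_succ_if_2_good = math.exp(-lam * k2 * dt)  # arm 2 stayed silent although good
    return (p * no_succ_if_1_good
            / (p * no_succ_if_1_good + (1 - p) * no_succ_if_2_good))

if __name__ == "__main__":
    p, lam, dt = 0.5, 1.0, 0.1
    # Only player 1 experiments: their lack of success is bad news for arm 1
    # and, by negative correlation, good news for player 2's arm.
    for _ in range(20):
        p = update_no_success(p, k1=1.0, k2=0.0, lam=lam, dt=dt)
    print(f"belief that arm 1 is good after 2 units of solo experimentation: {p:.3f}")
```

Absent a success, experimentation by one player alone drives the common belief about that player's arm down and, by negative correlation, the belief about the other player's arm up; a success on either arm resolves all uncertainty at once, since it identifies that arm as good and the other as bad.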
Background and Aims: Recent parsimony-based reconstructions suggest that seeds of early angiosperms had either morphophysiological or physiological dormancy, with the former considered the more probable. The aim of this study was to determine the class of seed dormancy present in Amborella trichopoda, the sole living representative of the most basal angiosperm lineage Amborellales, with a view to fully resolving the class of dormancy present at the base of the angiosperm clade.
Methods: Drupes of A. trichopoda without fleshy parts were germinated and dissected to observe their structure and embryo growth. Pre-treatments including acid scarification, gibberellin treatment and seed excision were tested to determine their influence on dormancy breakage and germination. Character-state mapping by maximum parsimony, incorporating data from the present work and published sources, was then used to determine the likely class of dormancy present in early angiosperms.
Key Results: Germination in A. trichopoda requires a warm stratification period of at least approx. 90 d, which is followed by endosperm swelling, causing the water-permeable pericarp-mesocarp envelope to split open. The embryo then grows rapidly within the seed, with radicle emergence some 17 d later and cotyledon emergence after an additional 24 d. Gibberellin treatment, acid scarification and excision of seeds from the surrounding drupe tissues all promoted germination by shortening the initial phase of dormancy, prior to embryo growth.
Conclusions: Seeds of A. trichopoda have non-deep simple morphophysiological dormancy, in which mechanical resistance of the pericarp-mesocarp envelope plays a key role in the initial physiological phase. Maximum parsimony analyses, including data obtained in the present work, indicate that morphophysiological dormancy is likely to be a plesiomorphic trait in flowering plants. The significance of this conclusion for studies of early angiosperm evolution is discussed.
This paper analyzes the case of a principal who wants to provide an agent with proper incentives to explore a hypothesis that can be either true or false. The agent can shirk, thus never proving the hypothesis, or he can avail himself of a known technology to produce fake successes. This latter option either makes it impossible to provide incentives for honesty or does not distort the cost of providing them at all. In the latter case, the principal will optimally commit to rewarding later successes even though he only cares about the first one. Indeed, after an honest success, the agent is more optimistic about his ability to generate further successes. This, in turn, provides incentives for the agent to be honest before a first success.
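The optimism channel in the last two sentences is Bayes' rule applied to the agent's ability. The sketch below uses a stylized two-type parameterization of our own (not the paper's model): a high-ability agent produces honest successes at Poisson rate lam_hi, a low-ability agent at a lower rate lam_lo.

```python
# Stylized two-type illustration (not the paper's model): the agent is of high
# ability with prior probability q and produces honest successes at Poisson
# rate lam_hi if high, lam_lo if low.

def posterior_after_success(q, lam_hi, lam_lo):
    """Posterior probability of high ability after one honest success."""
    return q * lam_hi / (q * lam_hi + (1 - q) * lam_lo)

if __name__ == "__main__":
    q, lam_hi, lam_lo = 0.5, 2.0, 0.5
    q_post = posterior_after_success(q, lam_hi, lam_lo)
    rate_before = q * lam_hi + (1 - q) * lam_lo
    rate_after = q_post * lam_hi + (1 - q_post) * lam_lo
    print(f"belief in high ability: {q:.2f} -> {q_post:.2f}")
    print(f"expected rate of future successes: {rate_before:.2f} -> {rate_after:.2f}")
```

Because the posterior on high ability, and with it the expected rate of future successes, rises after an honest success but not after a faked one (a fake teaches the agent nothing about his own ability), rewards attached to later successes are worth more on the honest path, which is the incentive channel the abstract describes.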
We experimentally implement a dynamic public-good problem, where the public good in question is the dynamically evolving information about agents' common state of the world. Subjects' behavior is consistent with free-riding because of strategic concerns. We also find that subjects adopt more complex behaviors than predicted by the welfare-optimal equilibrium, such as non-cutoff behavior, lonely pioneers, and frequent switches of action.
We examine a two-player game with two-armed exponential bandits à la Keller et al. (Econometrica 73:39–68, 2005), where players operate different technologies for exploring the risky option. We characterise the set of Markov perfect equilibria and show that there always exists an equilibrium in which the player with the inferior technology uses a cutoff strategy. All Markov perfect equilibria imply the same amount of experimentation but differ with respect to the expected speed of the resolution of uncertainty. If and only if the degree of asymmetry between the players is high enough, there exists a Markov perfect equilibrium in which both players use cutoff strategies. Whenever this equilibrium exists, it welfare-dominates all other equilibria. This contrasts with the case of symmetric players, where there never exists a Markov perfect equilibrium in cutoff strategies.
Keywords: Two-armed bandit · Heterogeneous agents · Free riding · Learning. JEL Classification: C73 · D83 · O31.
The second author gratefully acknowledges support from the Social Sciences and Humanities Research Council of Canada. Part of the results presented in this paper was already contained in the third author's undergraduate thesis, entitled "Strategisches Experimentieren mit asymmetrischen Spielern" ("Strategic Experimentation with Asymmetric Players"), which she submitted at the University of Munich in 2009 under her maiden name Tönjes.
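For reference, in the single-agent version of the exponential-bandit model cited above, a lone player with discount rate r, safe flow payoff s, and a risky arm that, if good, yields an expected payoff flow g > s via lump sums arriving at Poisson rate λ experiments exactly as long as the belief that the arm is good exceeds the cutoff p* = μs / ((μ+1)(g−s) + μs), where μ = r/λ. The sketch below (parameter names and values ours) evaluates this benchmark for two different arrival rates, one simple way of capturing a superior and an inferior exploration technology.

```python
# Single-agent benchmark cutoff in the exponential-bandit model of Keller,
# Rady and Cripps (2005): a good risky arm yields an expected payoff flow g
# via lump sums arriving at Poisson rate lam, the safe arm yields the flow s,
# and payoffs are discounted at rate r. A lone agent experiments as long as
# the belief that the risky arm is good exceeds p_star. Parameter values are
# illustrative only.

def single_agent_cutoff(r, lam, s, g):
    """Belief below which a single agent abandons the risky arm."""
    mu = r / lam
    return mu * s / ((mu + 1.0) * (g - s) + mu * s)

if __name__ == "__main__":
    r, s, g = 0.1, 1.0, 2.0
    for lam in (1.0, 0.5):  # a better and a worse exploration technology
        p_star = single_agent_cutoff(r, lam, s, g)
        print(f"lambda = {lam}: single-agent cutoff = {p_star:.3f}")
```

A lower λ raises this benchmark cutoff, since slower learning makes experimentation less attractive, other things equal; the equilibrium cutoffs of the two-player game differ from it because of free riding, so the numbers above serve only as a single-agent reference point.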