We present both experimental and theoretical results for an Anticipation Game, a two-stage game in which the standard Dictator Game is played after a matching phase wherein receivers use the past actions of dictators to decide whether to interact with them. The experimental results for three different treatments show that partner choice induces dictators to adjust their donations towards the expectations of the receivers, giving significantly more than expected in the standard Dictator Game. Adding noise to the dictators’ reputation lowers the donations, underlining that their actions are determined by the knowledge provided to receivers. We further show that the recently proposed stochastic evolutionary model, in which payoff only weakly drives evolution and individuals can make mistakes, requires some adaptations to explain the experimental results: the model fails to reproduce the heterogeneous strategy distributions. We show here that explicitly modelling the dictators’ probability of acceptance by receivers, and introducing a parameter that reflects the dictators’ capacity to anticipate future gains, produces a closer fit to the aforementioned strategy distributions. This new parameter has the important advantage of explaining where the dictators’ generosity comes from, revealing that anticipating future acceptance is the key to success.
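The mechanism described above can be sketched in a toy form. Everything below is an illustrative assumption on our part, not the authors' model: the logistic acceptance curve, the parameter names `gamma` (anticipation capacity), `steepness`, and `threshold` are all hypothetical. A dictator who donates a fraction `p` keeps `1 - p`, but receivers accept the dictator with a probability that grows with `p`; the anticipation parameter weights how strongly that future acceptance enters the dictator's expected payoff.

```python
import math

def acceptance(p, steepness=10.0, threshold=0.3):
    """Toy logistic acceptance probability: receivers are more likely to
    accept dictators whose reputation (past donation p) exceeds a threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (p - threshold)))

def expected_payoff(p, gamma):
    """Dictator keeps (1 - p); the anticipation parameter gamma weights
    how much future acceptance by receivers matters to the dictator."""
    return (1.0 - p) * acceptance(p) ** gamma

def best_donation(gamma, grid=101):
    """Grid-search the donation level that maximises the anticipated payoff."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda p: expected_payoff(p, gamma))
```

In this sketch a myopic dictator (`gamma = 0`) keeps everything, while a dictator who anticipates future acceptance (`gamma > 0`) is pushed towards generous offers — the qualitative effect the new parameter is introduced to capture.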
The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or to ignore societal consequences, just to “win”. Starting from a baseline model that describes a broad class of technology races in which winners draw a significant benefit compared to others (such as AI advances, patent races, and pharmaceutical technologies), we investigate here how positive (rewards) and negative (punishments) incentives may beneficially influence the outcomes. We uncover conditions in which punishment either reduces the development speed of unsafe participants or reduces innovation through over-regulation. Alternatively, we show that, in several scenarios, rewarding those that follow safety measures may increase the development speed while ensuring safe choices. Moreover, in the latter regimes, rewards do not suffer from the issue of over-regulation as is the case for punishment. Overall, our findings provide valuable insights into the nature and kinds of regulatory actions most suitable to improve safety compliance in the contexts of both smooth and sudden technological shifts.
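The incentive comparison at the heart of this abstract can be illustrated with a deliberately simplified payoff sketch. The functional form and parameter names below (`b`, `c`, `s`, `p_risk`, `sanction`, `reward`) are our own illustrative assumptions, not the paper's baseline model: a safe developer pays a precaution cost, an unsafe one moves faster but risks losing everything, and a regulator can add a reward to the former or a sanction to the latter.

```python
def payoff_safe(b, c, reward=0.0):
    """Per-round payoff of a developer complying with safety precautions:
    benefit b minus precaution cost c, plus any reward from the regulator."""
    return b - c + reward

def payoff_unsafe(b, s, p_risk, sanction=0.0):
    """Per-round payoff of an unsafe developer: moving s > 1 times faster,
    but losing the benefit with disaster probability p_risk, minus any
    sanction imposed by the regulator."""
    return s * b * (1.0 - p_risk) - sanction

def minimal_incentive(b, c, s, p_risk):
    """Smallest reward (to safe players) or sanction (on unsafe players)
    that makes safe development at least as profitable as unsafe."""
    gap = payoff_unsafe(b, s, p_risk) - payoff_safe(b, c)
    return max(0.0, gap)
```

For instance, with `b = 1`, `c = 0.2`, `s = 2`, `p_risk = 0.1`, the unsafe payoff (1.8) exceeds the safe one (0.8), so an incentive of at least 1.0 is needed. In this toy picture a reward of that size speeds development up (safe players gain more), whereas a sanction pushed far beyond the gap only depresses payoffs — a cartoon of the over-regulation effect the abstract attributes to punishment.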
Agents make commitments towards others in order to influence their behaviour in a certain way, often by forgoing more profitable options. Most commitments depend on some incentive that makes honouring the commitment in the agent's interest, for instance the avoidance of penalties for reneging. The capacity for using commitment strategies effectively is so important that natural selection may have shaped specialized capacities to make this possible. Evolutionary explanations for commitment, particularly its role in the evolution of cooperation, have been actively sought and discussed in several fields, including Psychology and Philosophy. In this paper, using the tools of evolutionary game theory, we provide a new model showing that individuals tend to engage in commitments, which leads to the emergence of cooperation even without assuming repeated interactions. The model is characterized by two key parameters: the punishment cost for failing a commitment, imposed on either party to the deal, and the cost of managing the commitment deal. Our analytical results and extensive computer simulations show that cooperation can emerge if the punishment cost is large enough compared to the management cost.
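The role of the two parameters can be sketched with a one-shot Prisoner's Dilemma augmented with commitments. The reduction to four strategies and the specific numbers below are our illustrative choices, not the paper's full model: COMP proposes a commitment at management cost `eps` and cooperates with acceptors; C cooperates unconditionally; D refuses commitments and defects; FAKE accepts commitments but then defects, paying the punishment cost `delta`.

```python
# Prisoner's Dilemma payoffs, T > R > P > S (illustrative values).
T, R, P, S = 2.0, 1.0, 0.0, -1.0

def payoff(me, other, eps=0.25, delta=4.0):
    """Payoff of strategy `me` against `other` in a one-shot commitment game.
    eps: cost of managing the commitment deal (shared when both propose);
    delta: punishment cost for failing a commitment."""
    if me == "COMP":
        if other == "COMP":
            return R - eps / 2          # both propose and share the cost
        if other == "C":
            return R - eps              # C accepts and cooperates
        if other == "FAKE":
            return S - eps + delta      # FAKE defects but is punished
        return -eps                     # D refuses; no interaction occurs
    if me == "C":
        return {"COMP": R, "C": R, "D": S, "FAKE": S}[other]
    if me == "D":
        return {"COMP": 0.0, "C": T, "D": P, "FAKE": P}[other]
    if me == "FAKE":                    # accepts, defects, pays the penalty
        return {"COMP": T - delta, "C": T, "D": P, "FAKE": P}[other]
    raise ValueError(me)
```

With these numbers, a population of commitment proposers earns `R - eps/2 = 0.875` against itself, which exceeds both `T - delta = -2` (what FAKE earns against COMP) and `0` (what D earns against COMP), so COMP resists invasion. Shrinking `delta` below `T - R + eps/2` reverses the first inequality — a toy instance of the abstract's condition that the punishment cost must be large enough relative to the management cost.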