Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in one-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. Common explanations of these results assume that the contradictory anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current paper analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and four additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to the probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate And Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.
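The core idea of combining an expected-value estimate with the mean of a small random sample can be sketched in a few lines. This is a minimal illustrative toy, not the published BEAST model: the function names, the equal weighting of the two terms, and the sample size `kappa` are assumptions made here for exposition.

```python
import random

def choose(prospect_a, prospect_b, kappa=5, rng=random):
    """Toy choice rule in the spirit of 'expected value + small sample'.

    Each prospect is a list of (payoff, probability) pairs. The decision
    maker scores each prospect as its true expected value plus the mean
    of kappa random draws from its distribution, then picks the option
    with the higher score. Illustrative sketch only, not BEAST itself.
    """
    def expected_value(prospect):
        return sum(x * p for x, p in prospect)

    def sample_mean(prospect):
        payoffs, probs = zip(*prospect)
        draws = rng.choices(payoffs, weights=probs, k=kappa)
        return sum(draws) / kappa

    score_a = expected_value(prospect_a) + sample_mean(prospect_a)
    score_b = expected_value(prospect_b) + sample_mean(prospect_b)
    return "A" if score_a >= score_b else "B"
```

Because the sample is small, a rare payoff is often absent from the draws, which is how such a rule can mimic underweighting of rare events without any subjective probability-weighting function.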
Previous research demonstrates that feedback in decisions under risk leads people to behave as if they give less weight to rare events. We clarify the boundaries of this phenomenon and shed light on the underlying mechanisms. In a preregistered experiment, participants faced 60 different decisions-under-risk choice tasks. Each task was a choice between a safe prospect (e.g., "59 with certainty") and a "rare disaster" gamble ("60 with p = .98; 10 otherwise"). Additionally, each option also incurred a small cost (a draw from the set {−8, −6, −4, −2, 0} was added to the payoff). After each choice, participants received full feedback concerning the (total) realized payoffs of each option. The experiment compared 2 conditions that differed in the dependency between the 2 added costs. The results reveal high sensitivity to this dependency. Underweighting of rare events (preference for the rare disaster gamble) emerged with experience only when this dependency implied that in most cases, the rare disaster alternative provides a higher outcome than the safe alternative. In contrast, when in most cases the final outcomes from the safer option were higher, feedback appeared to increase the weighting of rare events (i.e., increased preference for the safe option). Common decisions-under-risk models (e.g., prospect theory) that assume the value of each prospect is judged only as a function of its own payoff distribution cannot account for this difference. Yet, the results can be explained with the hypothesis that choice reflects reliance on small samples of past experiences with similar decision tasks.
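A quick simulation shows why reliance on small samples favors the "rare disaster" gamble described above. The gamble pays 60 with p = .98 and 10 otherwise; the safe option pays 59 for sure. A decision maker who judges the gamble by the mean of only k past draws prefers it whenever no disaster appears in the sample, which with k = 3 happens with probability 0.98³ ≈ 0.94. The sample size k and the simulation setup are assumptions for illustration, not parameters from the experiment.

```python
import random

def prefers_gamble(k=3, rng=random):
    """True if the mean of k draws from the gamble exceeds the safe 59.

    The mean exceeds 59 only when all k draws are 60, i.e., when the
    rare disaster (payoff 10, p = .02) is absent from the small sample.
    """
    draws = [60 if rng.random() < 0.98 else 10 for _ in range(k)]
    return sum(draws) / k > 59

rng = random.Random(0)  # fixed seed so the estimate is reproducible
rate = sum(prefers_gamble(rng=rng) for _ in range(10_000)) / 10_000
# rate should be close to 0.98 ** 3, roughly 0.94
```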
New technology can be used to enhance safety by imposing costs, or taxes, on certain reckless behaviors. The current paper presents two pre-registered experiments that clarify the impact of taxation of this type on decisions from experience between three alternatives. Experiment 1 focuses on an environment in which safe choices maximize expected returns and examines the impact of taxing the more attractive of two risky options. The results reveal a U-shaped effect of taxation: some taxation improves safety, but too much taxation impairs safety. Experiment 2 shows a clear negative effect of high taxation even when the taxation eliminates the expected benefit from risk-taking. Comparison of alternative models suggests that taxing reckless behaviors backfires when it significantly increases the proportion of experiences in which a more dangerous behavior leads to better outcomes than the taxed behavior. Qualitative hypotheses derived from naïve sampling models assuming small samples were only partially supported by the data.
Experience is the best teacher. Yet, in the context of repeated decisions, experience was found to trigger deviations from maximization in the direction of underweighting of rare events. Evaluations of alternative explanations for this bias have led to contradictory conclusions. Studies that focused on the aggregate choice rates, including a series of choice prediction competitions, favored the assumption that this bias reflects reliance on small samples. In contrast, studies that focused on individual decisions suggest that the bias reflects a strong myopic tendency by a significant minority of participants. The current analysis clarifies the apparent inconsistency by reanalyzing a data set that previously led to contradictory conclusions. Our analysis suggests that the apparent inconsistency reflects the differing focus of the cognitive models. Specifically, sequential adjustment models (that assume sensitivity to the payoffs’ weighted averages) tend to find support for the hypothesis that the deviations from maximization are a product of strong positive recency (a form of myopia). Conversely, models assuming random sampling of past experiences tend to find support for the hypothesis that the deviations reflect reliance on small samples. We propose that the debate should be resolved by using the assumptions that provide better predictions. Applying this solution to the data set we analyzed shows that the random sampling assumption outperforms the weighted average assumption both when predicting the aggregate choice rates and when predicting the individual decisions.
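The two model classes contrasted above differ in how they turn a payoff history into a value estimate. A minimal sketch of each, with an assumed learning rate `alpha` and sample size `kappa` (both hypothetical parameters, not fitted values from the paper):

```python
import random

def recency_weighted_value(history, alpha=0.5):
    """Sequential adjustment: exponentially recency-weighted average.

    Each new payoff pulls the estimate toward itself by a fraction
    alpha; a high alpha implies strong positive recency (myopia).
    """
    v = history[0]
    for payoff in history[1:]:
        v += alpha * (payoff - v)
    return v

def sampled_value(history, kappa=4, rng=random):
    """Random sampling: mean of a small random sample of past payoffs.

    All past experiences are equally likely to be drawn, but the small
    sample size kappa makes rare past outcomes easy to miss.
    """
    draws = [rng.choice(history) for _ in range(kappa)]
    return sum(draws) / kappa
```

Both rules can underweight rare events, but for different reasons: the first because old experiences decay, the second because a small sample often omits them, which is why fitting one family can point to recency while fitting the other points to small samples.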