Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1–3). In fact, our results suggest that this preference reflects a desire for some control over the forecasting outcome rather than a desire for greater control, as it was relatively insensitive to the magnitude of the modifications participants were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that algorithm aversion can be reduced by giving people some control, even a slight amount, over an imperfect algorithm's forecast. Data, as supplemental material, are available at https://doi.org/10.1287/mnsc.2016.2643. This paper was accepted by Yuval Rottenstreich, judgment and decision making.
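The "restricted modification" manipulation described above can be pictured as a bounded adjustment applied to the algorithm's output. The sketch below is illustrative only, assuming a hypothetical cap of ±10 units and a made-up function name; it is not the authors' experimental software or the values used in the studies.

```python
def constrained_forecast(algorithm_forecast: float,
                         requested_adjustment: float,
                         max_adjustment: float = 10.0) -> float:
    """Return the forecast actually submitted when a participant may move
    the algorithm's forecast by at most +/- max_adjustment units.

    The +/-10 cap is a placeholder, not a value taken from the paper.
    """
    # Clamp the requested change to the allowed band around the algorithm.
    bounded = max(-max_adjustment, min(max_adjustment, requested_adjustment))
    return algorithm_forecast + bounded


# Example: the algorithm predicts 72; a participant who wants 90 can only
# submit 82 under a +/-10 restriction.
print(constrained_forecast(72.0, requested_adjustment=18.0))  # 82.0
```

The point of the restriction is that even a tightly clamped adjustment gives the user a sense of control while keeping the submitted forecast close to the algorithm's (typically superior) prediction.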
Will people use self-driving cars, virtual doctors, and other algorithmic decision-makers if they outperform humans? The answer depends on the uncertainty inherent in the decision domain. We propose that people have diminishing sensitivity to forecasting error and that this pattern of preferences leads them to favor riskier (and often worse-performing) decision-making methods, such as human judgment, in inherently uncertain domains. In nine studies (N = 4,820), we found that (a) people have diminishing sensitivity to each marginal unit of error that a forecast produces, (b) people are less likely to use the best possible algorithm in decision domains that are more unpredictable, (c) people choose between decision-making methods on the basis of the perceived likelihood of those methods producing a near-perfect answer, and (d) people prefer methods that exhibit higher variance in performance (all else being equal). To the extent that investing, medical decision-making, and other domains are inherently uncertain, people may be unwilling to use even the best possible algorithm in those domains.
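As a rough illustration of finding (a), diminishing sensitivity is consistent with a concave disutility function over error magnitude. The functional form and exponent below are assumptions chosen for illustration, not parameters estimated in the paper.

```python
# Illustrative only: a concave disutility-of-error function such as
# v(e) = -e**alpha with 0 < alpha < 1 exhibits diminishing sensitivity:
# each additional unit of forecasting error hurts less than the one before.
# The exponent 0.5 is an assumed placeholder, not a parameter from the paper.

def disutility(error: float, alpha: float = 0.5) -> float:
    """Subjective cost of an absolute forecasting error of size `error`."""
    return -(abs(error) ** alpha)

# Marginal cost of the first unit of error vs. the tenth unit of error.
first_unit = disutility(1) - disutility(0)    # -1.0
tenth_unit = disutility(10) - disutility(9)   # roughly -0.16
print(first_unit, tenth_unit)
```

Under such a curve, a higher-variance method that is sometimes near-perfect and sometimes far off can feel subjectively better than a steadier method with the same average error, which is consistent with findings (c) and (d).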