In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.
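To make the key hypothesis concrete, here is a minimal sketch (the lexicon, event, and sample size are illustrative assumptions, not any published model): a probability is judged not by exact calculation but by drawing a few instances from memory or mental simulation and counting the fraction that match. With so few samples, judgments are variable and dominated by whatever comes to mind most readily, in the spirit of the availability heuristic.

```python
import random

def sample_based_probability(simulate_instance, event, n_samples=5):
    """Estimate P(event) from n_samples draws of a mental simulation."""
    hits = sum(event(simulate_instance()) for _ in range(n_samples))
    return hits / n_samples

# Hypothetical example: judging the probability that a recalled word
# contains the letter 'a', by sampling from a tiny mental lexicon.
lexicon = ["the", "cat", "sat", "on", "mat", "dog", "ran", "fast"]
estimate = sample_based_probability(
    simulate_instance=lambda: random.choice(lexicon),
    event=lambda word: "a" in word,
    n_samples=5,  # few samples -> variable, availability-like judgments
)
print(estimate)
```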
Human beings perform well in uncertain environments, matching the performance of sophisticated probabilistic models in complex tasks such as language prediction or predicting the behaviour of physical systems. Yet people's judgments about probabilities also display well-known biases. How can this be? Recently, cognitive scientists have explored the possibility that the same sampling algorithms used in computer science to approximate complex probabilistic models are also used in the mind and brain. We review experimental evidence that characterises the human sampling algorithm, and discuss how such an algorithm could potentially explain aspects of the movement of asset prices in financial markets. We also discuss how many of the biases that people display may be the direct result of using only a small number of samples, but using them efficiently. Because human beings make successful real-time decisions using only rough estimates of uncertainty, machine intelligence could plausibly do the same.
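As a concrete illustration of "a small number of samples, used efficiently" (my illustration, with assumed parameter values, not a specific model from the review), the sketch below compares a raw five-sample relative frequency with the same counts regularised by Beta(1, 1) pseudo-counts. The adjusted judgment is pulled toward 0.5, a conservatism-like bias, yet achieves lower average error when event probabilities vary from query to query.

```python
import random

N, BETA = 5, 1.0

def judge(p_true):
    k = sum(random.random() < p_true for _ in range(N))
    raw = k / N                             # unbiased, but high variance
    adjusted = (k + BETA) / (N + 2 * BETA)  # biased toward 0.5, lower error
    return raw, adjusted

err_raw = err_adj = 0.0
trials = 100_000
for _ in range(trials):
    p = random.random()                     # a different query each trial
    raw, adj = judge(p)
    err_raw += (raw - p) ** 2
    err_adj += (adj - p) ** 2
print(f"raw MSE: {err_raw / trials:.4f}, adjusted MSE: {err_adj / trials:.4f}")
# Expected: roughly 0.033 vs 0.024 -- the biased judgment is more accurate.
```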
Human probability judgments are variable and subject to systematic biases. Sampling-based accounts of probability judgment have successfully explained such idiosyncrasies by assuming that people remember or simulate instances of events and base their judgments on sampled frequencies. Biases have been explained either by noise corrupting sample accumulation (the Probability Theory + Noise account), or as a Bayesian adjustment to the uncertainty implicit in small samples (the Bayesian sampler). While these two accounts closely mimic one another, here we show that they can be distinguished by a novel linear regression method that relates the variance of repeated judgments to their means. First, the efficacy of the method is confirmed by model recovery, where it recovers parameters more accurately than computationally complex methods. Second, the method is applied to both existing and new probability judgment data, which confirm that judgments are based on a small number of samples that are adjusted by a prior, as predicted by the Bayesian sampler.
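The regression idea can be sketched as follows (my reconstruction with assumed parameter values, not the paper's exact procedure). For a Bayesian sampler with N samples and a symmetric Beta(beta, beta) prior, a judgment of an event with probability p is (k + beta)/(N + 2*beta) with k ~ Binomial(N, p). A little algebra then gives Var(judgment) = mean*(1 - mean)/N - beta*(N + beta)/(N*(N + 2*beta)^2), so regressing the variance of repeated judgments on mean*(1 - mean) recovers N from the slope, while a reliably negative intercept signals the prior adjustment; a plain noise account predicts an intercept near zero.

```python
import random
import statistics

N, BETA = 5, 1.0  # assumed sampler parameters for the simulation

def judgment(p):
    k = sum(random.random() < p for _ in range(N))
    return (k + BETA) / (N + 2 * BETA)

xs, ys = [], []
for p in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
    reps = [judgment(p) for _ in range(5_000)]  # repeated judgments of one query
    m = statistics.fmean(reps)
    xs.append(m * (1 - m))
    ys.append(statistics.pvariance(reps))

# Ordinary least squares by hand: slope ~ 1/N, intercept reliably negative.
mx, my = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"slope {slope:.3f} (1/N = {1 / N:.3f}), intercept {intercept:.4f}")
```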
Estimation, choice, confidence, and response times are the primary behavioural measures in perceptual and cognitive tasks. These measures have attracted extensive modeling efforts in the cognitive sciences, but a unified approach that explains all of them simultaneously within one framework has been lacking. We propose an Autocorrelated Bayesian Sampler (ABS), assuming that people sequentially sample from a posterior probability distribution of hypotheses on each trial of a perceptual or cognitive task. Unlike most accounts of choice, we do not assume that the system has global knowledge of the posterior distribution. Instead, it uses a sophisticated sampling algorithm to make do with local knowledge, and so produces autocorrelated samples. The model assumes that each sample takes time to generate, and that samples are used by well-validated mechanisms to produce estimates, choices, and confidence judgments. This relatively simple framework clears a number of well-known empirical hurdles for models of choice, confidence, and response time. The autocorrelation between samples also allows the model to predict the long-range between-trial dependence observed in both estimates and response times.
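The core mechanics can be sketched in a few lines (all functional forms and parameters below are my assumptions, not the authors' exact equations): a random-walk Metropolis chain supplies autocorrelated posterior samples, each sample costs time, and estimate, choice, confidence, and response time are all read off the same chain.

```python
import math
import random

def log_posterior(h, observed=0.3, noise=1.0):
    """Assumed Gaussian posterior over the hypothesised stimulus value."""
    return -0.5 * ((h - observed) / noise) ** 2

def abs_trial(n_samples=20, step=0.5, ms_per_sample=50):
    h = random.gauss(0.0, 1.0)                  # starting hypothesis
    chain = []
    for _ in range(n_samples):
        proposal = h + random.gauss(0.0, step)  # local move -> autocorrelation
        accept = math.exp(min(0.0, log_posterior(proposal) - log_posterior(h)))
        if random.random() < accept:
            h = proposal
        chain.append(h)
    estimate = sum(chain) / len(chain)          # estimation: mean of samples
    choice = sum(s > 0 for s in chain) > len(chain) / 2  # 2AFC: majority vote
    confidence = sum((s > 0) == choice for s in chain) / len(chain)
    rt = n_samples * ms_per_sample              # each sample takes time (ms)
    return estimate, choice, confidence, rt

print(abs_trial())
```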