Human decisions are known to be systematically biased. A prominent example of such a bias occurs when integrating a sequence of sensory evidence over time. Previous empirical studies differ in the nature of the bias they observe, ranging from favoring early evidence (primacy) to favoring late evidence (recency). Here, we present a unifying framework that explains these biases and makes novel psychophysical and neurophysiological predictions. By explicitly modeling both the approximate and the hierarchical nature of inference in the brain, we show that temporal biases depend on the balance between "sensory information" and "category information" in the stimulus.
Finally, we present new data from a human psychophysics task that confirm that temporal biases can be robustly changed within subjects, as predicted by our models.
Imagine a doctor trying to infer the cause of a patient's symptoms from an x-ray image. Unsure about the evidence in the image, she asks a radiologist for a second opinion. If she tells the radiologist her suspicion, she may bias his report. If she does not, he may not detect a faint diagnostic pattern. As a result, if the evidence in the image is hard to detect or ambiguous, the radiologist's second opinion, and hence the final diagnosis, may be swayed by the doctor's initial hypothesis. The problem faced by these doctors exemplifies the difficulty of hierarchical inference: each doctor's suspicion both informs and is informed by their collective diagnosis. If they are not careful, their diagnosis may fall prey to circular reasoning. The brain faces a similar problem during perceptual decision-making: any decision-making area combines sequential signals from sensory brain areas, not directly from sensory input, just as the doctors' consensus is based on their individual diagnoses rather than on the evidence per se. If sensory signals in the brain themselves reflect inferences that combine both prior expectations and sensory evidence, we suggest that this can then lead to an observable perceptual confirmation bias (Nickerson, 1998).

We formalize this idea in the context of approximate Bayesian inference and classic evidence-integration tasks in which a range of biases has been observed and for which a unifying explanation is currently lacking. Evidence-integration tasks require subjects to categorize a sequence of independent and identically distributed (iid) draws of stimuli (Gold and Shadlen, 2007; Bogacz et al., 2006). Previous normative models of evidence integration hinge on two quantities: the amount of information available on a single stimulus draw and the total number of draws. One might expect, then, that temporal biases should have some canonical form in tasks where these quantities are matched. However, existing studies are heterogeneous, reporting one of three distinct motifs: some find that early evidence is weighted more strongly (a primacy effect) (Kiani et al., 2008; Nienborg and Cu...
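To make this normative benchmark concrete, a standard ideal-observer formulation of such a task (the notation here is illustrative and not taken from the studies cited above) accumulates log odds for a binary category C over the T iid stimulus draws x_1, ..., x_T:

\[
\log \frac{p(C = +1 \mid x_{1:T})}{p(C = -1 \mid x_{1:T})}
= \log \frac{p(C = +1)}{p(C = -1)}
+ \sum_{t=1}^{T} \log \frac{p(x_t \mid C = +1)}{p(x_t \mid C = -1)}.
\]

Because the draws are iid, each term in the sum contributes equally regardless of its position in the sequence, so the ideal observer weights early and late evidence identically; primacy and recency effects are departures from this flat temporal weighting.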