Making good decisions requires updating beliefs according to new evidence. This is a dynamical process that is prone to biases: in some cases, beliefs become entrenched and resistant to new evidence (leading to primacy effects), while in other cases, beliefs fade over time and rely primarily on later evidence (leading to recency effects). How and why either type of bias dominates in a given context is an important open question. Here, we study this question in classic perceptual decision-making tasks, where, puzzlingly, previous empirical studies differ in the kinds of biases they observe, ranging from primacy to recency, despite seemingly equivalent tasks. We present a new model, based on hierarchical approximate inference and derived from normative principles, that not only explains both primacy and recency effects in existing studies, but also predicts how the type of bias should depend on the statistics of stimuli in a given task. We verify this prediction in a novel visual discrimination task with human observers, finding that each observer’s temporal bias changed as the result of changing the key stimulus statistics identified by our model. The key dynamic that leads to a primacy bias in our model is an overweighting of new sensory information that agrees with the observer’s existing belief—a type of ‘confirmation bias’. By fitting an extended drift-diffusion model to our data we rule out an alternative explanation for primacy effects due to bounded integration. Taken together, our results resolve a major discrepancy among existing perceptual decision-making studies, and suggest that a key source of bias in human decision-making is approximate hierarchical inference.
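To make the confirmation-bias dynamic concrete, the following minimal simulation (our own sketch, not the authors' fitted model; the signal strength `kappa`, the feedback gain `gamma`, and all other names are hypothetical) integrates iid evidence frames and overweights any frame that agrees with the current running belief. With the feedback gain set to zero the temporal weights are flat, as for an ideal integrator; with a positive gain, early frames dominate the choice, producing a primacy kernel.

```python
# Toy sketch of the confirmation-bias dynamic (assumptions: binary category
# C in {-1, +1}, iid frames e_t ~ N(C * kappa, 1); names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def temporal_weights(kappa=0.3, gamma=0.0, n_frames=10, n_trials=20000):
    """Correlation of each frame's evidence with the final choice."""
    C = rng.choice([-1.0, 1.0], size=n_trials)            # true category
    e = rng.normal(C[:, None] * kappa, 1.0, (n_trials, n_frames))
    L = np.zeros(n_trials)                                # running log-odds for C = +1
    for t in range(n_frames):
        llr = 2.0 * kappa * e[:, t]                       # exact per-frame log-likelihood ratio
        boost = 1.0 + gamma * (np.sign(L) == np.sign(llr))  # confirmation bias:
        L += boost * llr                                  # agreeing frames are overweighted
    choice = np.sign(L)
    w = np.array([np.corrcoef(e[:, t], choice)[0, 1] for t in range(n_frames)])
    return w / w.mean()                                   # normalized temporal kernel

print(temporal_weights(gamma=0.0).round(2))   # flat weights: unbiased integration
print(temporal_weights(gamma=1.0).round(2))   # decreasing weights: primacy
```

This sketch only illustrates the entrenchment side; the full model predicts whether primacy or recency dominates from the balance of sensory and category information in the stimulus.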
Human decisions are known to be systematically biased. A prominent example of such a bias occurs when integrating a sequence of sensory evidence over time. Previous empirical studies differ in the nature of the bias they observe, ranging from favoring early evidence (primacy), to favoring late evidence (recency). Here, we present a unifying framework that explains these biases and makes novel psychophysical and neurophysiological predictions. By explicitly modeling both the approximate and the hierarchical nature of inference in the brain, we show that temporal biases depend on the balance between "sensory information" and "category information" in the stimulus. Finally, we present new data from a human psychophysics task that confirm that temporal biases can be robustly changed within subjects as predicted by our models.

Imagine a doctor trying to infer the cause of a patient's symptoms from an x-ray image. Unsure about the evidence in the image, she asks a radiologist for a second opinion. If she tells the radiologist her suspicion, she may bias his report. If she does not, he may not detect a faint diagnostic pattern. As a result, if the evidence in the image is hard to detect or ambiguous, the radiologist's second opinion, and hence the final diagnosis, may be swayed by the doctor's initial hypothesis. The problem faced by these doctors exemplifies the difficulty of hierarchical inference: each doctor's suspicion both informs and is informed by their collective diagnosis. If they are not careful, their diagnosis may fall prey to circular reasoning. The brain faces a similar problem during perceptual decision-making: any decision-making area combines sequential signals from sensory brain areas, not directly from sensory input, just as the doctors' consensus is based on their individual diagnoses rather than on the evidence per se. If sensory signals in the brain themselves reflect inferences that combine both prior expectations and sensory evidence, we suggest that this can then lead to an observable perceptual confirmation bias (Nickerson, 1998).

We formalize this idea in the context of approximate Bayesian inference and classic evidence-integration tasks in which a range of biases has been observed and for which a unifying explanation is currently lacking. Evidence-integration tasks require subjects to categorize a sequence of independent and identically distributed (iid) draws of stimuli (Gold and Shadlen, 2007; Bogacz et al., 2006). Previous normative models of evidence integration hinge on two quantities: the amount of information available on a single stimulus draw and the total number of draws. One might expect, then, that temporal biases should have some canonical form in tasks where these quantities are matched. However, existing studies are heterogeneous, reporting one of three distinct motifs: some find that early evidence is weighted more strongly (a primacy effect) (Kiani et al., 2008; Nienborg and Cu...
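As a reference point for the biases discussed above, the exact inference these tasks call for can be written out for a two-level generative model (a sketch in our own notation: $C$ is the category, $x_t$ the intermediate sensory variable on frame $t$, and $e_t$ the corresponding evidence):

```latex
\log \frac{p(C=+1 \mid e_{1:T})}{p(C=-1 \mid e_{1:T})}
  = \log \frac{p(C=+1)}{p(C=-1)}
  + \sum_{t=1}^{T} \log
    \frac{\int p(e_t \mid x_t)\, p(x_t \mid C=+1)\, \mathrm{d}x_t}
         {\int p(e_t \mid x_t)\, p(x_t \mid C=-1)\, \mathrm{d}x_t}.
```

Here $p(x_t \mid C)$ carries what is called "category information" above and $p(e_t \mid x_t)$ the "sensory information". The circular-reasoning hazard is that an approximate hierarchical observer evaluates the inner integral with a sensory posterior over $x_t$ that already incorporates its current belief about $C$, so that belief is effectively counted twice.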
The Bayesian Brain hypothesis, according to which the brain implements statistical algorithms, is one of the leading theoretical frameworks in neuroscience. There are two distinct underlying philosophies: one in which the brain recovers structures that exist in the world from sensory neural activity (decoding), and another in which it represents latent quantities in an internal model (encoding). We argue that an implicit disagreement on this point underlies much of the vigorous debate surrounding the neural implementation of statistical algorithms, in particular the difference between sampling-based and parametric distributional codes. To demonstrate the complementary nature of the two approaches, we have shown mathematically that encoding by sampling can be equivalently interpreted as decoding task variables in a manner consistent with linear probabilistic population codes (PPCs), a popular decoding approach. Ongoing research on the nature of Bayesian inference in the brain will benefit from making its philosophical stance explicit in order to avoid misunderstandings and false dichotomies.
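As a toy sanity check of this encode/decode duality (our own construction and binning scheme, not the paper's analytical derivation, which concerns log-linear PPC readouts), the sketch below draws samples from a posterior over a latent x and reads out the posterior over a task variable s linearly from the sample counts, using the identity p(s | I) = E over x ~ p(x|I) of p(s | x), which holds because s and the image I are conditionally independent given x.

```python
# Toy model: s in {0, 1} sets the mean of latent x; noisy "image" I observes x.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma = np.array([-1.0, 1.0]), 0.5       # p(x|s) means; pixel noise std
I = 0.4                                      # one observed image

# exact posterior over the task variable (marginal likelihood I|s is Gaussian)
like = norm.pdf(I, loc=mu, scale=np.sqrt(1 + sigma**2))
p_s_exact = like / like.sum()

# sample x ~ p(x | I): a mixture of the two conditional posteriors p(x | s, I)
n = 100_000
s_samp = rng.choice(2, size=n, p=p_s_exact)
post_mean = (mu[s_samp] + I / sigma**2) / (1 + 1 / sigma**2)
x = rng.normal(post_mean, np.sqrt(1 / (1 + 1 / sigma**2)))

# linear readout: bin the samples (counts r_j stand in for neural activity)
# and average the per-bin posterior p(s | x)
bins = np.linspace(-5, 5, 101)
r, _ = np.histogram(x, bins)
centers = (bins[:-1] + bins[1:]) / 2
p_s_given_x = norm.pdf(centers[:, None], loc=mu, scale=1.0)
p_s_given_x /= p_s_given_x.sum(axis=1, keepdims=True)
p_s_readout = r @ p_s_given_x / r.sum()

print(p_s_exact, p_s_readout)   # the two posteriors should agree closely
```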
With the recent trend of applying machine learning to every aspect of human life, it is important to incorporate fairness into the core of predictive algorithms. We address the problem of predicting the quality of public speeches while being fair with respect to sensitive attributes of the speakers, e.g., gender and race. We use TED talks as an input repository of public speeches because the corpus features speakers from a diverse community and has a wide outreach. Utilizing the theories of Causal Models and Counterfactual Fairness together with state-of-the-art neural language models, we propose a mathematical framework for fair prediction of public speaking quality. We employ grounded assumptions to construct a causal model capturing how different attributes affect public speaking quality; this causal model is then used to generate counterfactual data for training a fair predictive model. Our framework is general enough to accommodate any set of assumptions within the causal model. Experimental results show that, while prediction accuracy is comparable to recent work on this dataset, our predictions are counterfactually fair with respect to a novel metric when compared to true data labels. The FairyTED setup not only allows organizers to make an informed and diverse selection of speakers from the unobserved counterfactual possibilities, but also ensures that viewers and new users deciding whether to watch a talk are not influenced by unfair and unbalanced ratings from arbitrary visitors to the ted.com website.
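To make the counterfactual-data idea concrete, here is a minimal sketch with a three-variable structural causal model of our own invention (latent skill U, sensitive attribute A, observed feature X; this is not the paper's actual FairyTED causal graph): counterfactual copies of each speaker are generated by holding U fixed and intervening on A, and a predictor trained on the augmented data makes approximately the same prediction for a talk and its counterfactual twin.

```python
# Minimal counterfactual-data-augmentation sketch (toy SCM, hypothetical names).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 5000
U = rng.normal(size=n)                          # latent skill (exogenous)
A = rng.integers(0, 2, size=n)                  # sensitive attribute, e.g. gender
X = U + 0.8 * A + rng.normal(0, 0.1, size=n)    # observed feature: depends on both
Y = U + rng.normal(0, 0.1, size=n)              # "true quality": skill only

# counterfactual features: same exogenous U (abduction), A set to 1 - A (intervention)
X_cf = U + 0.8 * (1 - A) + rng.normal(0, 0.1, size=n)

# train on factual + counterfactual copies; labels are shared, since quality
# should not change under an intervention on the sensitive attribute
feats = np.column_stack([np.r_[X, X_cf], np.r_[A, 1 - A]])
model = LinearRegression().fit(feats, np.r_[Y, Y])

# counterfactual fairness check: prediction gap under do(A := 1 - A)
gap = (model.predict(np.column_stack([X_cf, 1 - A]))
       - model.predict(np.column_stack([X, A])))
print(abs(gap).mean())   # near zero -> counterfactually fair predictions
```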