Human subjects differentially weight different stimuli in averaging tasks. This has been interpreted as reflecting biased stimulus encoding, but an alternative hypothesis is that stimuli are encoded with noise, then optimally decoded. Moreover, with efficient coding, the amount of noise should vary across stimulus space, and depend on the statistics of stimuli. We investigate these predictions through a task in which participants are asked to compare the averages of two series of numbers, each sampled from a prior distribution that differs across blocks of trials. We show that subjects encode numbers with both a bias and a noise that depend on the number. Infrequently occurring numbers are encoded with more noise. A maximum-likelihood decoding model captures subjects' behaviour and indicates efficient coding. Finally, our model predicts a relation between the bias and variability of estimates, thus providing a statistically founded, parsimonious derivation of Wei and Stocker's "law of human perception".

In many decision problems, someone is presented with an array of variables that must be aggregated in order to identify the optimal decision. How humans combine several sources of information in their decision-making process is a long-standing debate in economics and cognitive science [1,2,3,4,5]. Recently, a series of experimental studies has focused on averaging tasks, in which subjects are presented with several stimuli (sometimes numbers, but sometimes visual stimuli characterized by their length, orientation, shape, or color) and asked to make a decision about the average magnitude of the presented stimuli [6,7,8,9]. Although the contribution of each stimulus to the average should, in theory, be proportional to its true magnitude, the weights attributed to stimuli by human subjects, in their decisions, appear to be nonlinear functions of their magnitudes.
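The encode-then-decode account in the abstract can be illustrated with a minimal conjugate-Gaussian sketch (the function name and all numerical values below are illustrative assumptions, not the study's fitted parameters): a noisy internal measurement of a presented number is decoded as the posterior mean under a Gaussian prior, which biases the estimate toward the prior mean, and more strongly when the encoding noise is larger.

```python
# Sketch: noisy encoding followed by Bayesian (posterior-mean) decoding.
# All parameters are illustrative assumptions, not values from the study.

def posterior_mean(m, sigma_m, mu_prior, sigma_prior):
    """Posterior mean for a Gaussian prior combined with a Gaussian
    measurement m of known noise sigma_m (standard conjugate update)."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_m**2)
    return w * m + (1 - w) * mu_prior

# A presented number 8, encoded with noise sd 2, under a prior centered on 5:
print(posterior_mean(8.0, 2.0, 5.0, 2.0))  # 6.5: biased toward the prior mean

# Doubling the encoding noise strengthens the bias toward the prior:
print(posterior_mean(8.0, 4.0, 5.0, 2.0))  # 5.6
```

The second call shows the bias–variability link in miniature: numbers encoded more noisily (e.g., infrequent ones, under efficient coding) are pulled more strongly toward the prior, echoing the Wei–Stocker relation mentioned in the abstract.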
Subjects asked to compare the averages of two series of digits, for instance, overweight larger digits when making a decision [6]. What is the origin of this seemingly suboptimal behaviour? Refs. [6,7] show that if comparison of the average encoded values involves noise, then a nonlinear transformation of the presented stimuli can partially compensate for the performance loss induced by the noise. The nonlinear
To make informed decisions in natural environments that change over time, humans must update their beliefs as new observations are gathered. Studies exploring human inference as a dynamical process that unfolds in time have focused on situations in which the statistics of observations are history-independent. Yet temporal structure is everywhere in nature, and yields history-dependent observations. Do humans modify their inference processes depending on the latent temporal statistics of their observations? We investigate this question experimentally and theoretically using a change-point inference task. We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli. As such, humans behave qualitatively in a Bayesian fashion, but, quantitatively, deviate from optimality. Perhaps more importantly, humans behave suboptimally in that their responses are not deterministic, but variable. We show that this variability itself is modulated by the temporal statistics of stimuli. To elucidate the cognitive algorithm that yields this behavior, we investigate a broad array of existing and new models that characterize different sources of suboptimal deviations from Bayesian inference. While models with 'output noise' that corrupts the response-selection process are natural candidates, human behavior is best described by sampling-based inference models, in which the main ingredient is a compressed approximation of the posterior, represented through a modest set of random samples and updated over time. This result complements a growing literature on sample-based representation and learning in humans.
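A minimal sketch of the sampling-based inference the abstract describes, assuming a toy change-point setting (Gaussian observations around a latent mean that occasionally jumps; the hazard rate, noise level, prior range, and particle count are all illustrative assumptions): the posterior over the latent mean is carried as a modest set of samples and updated with each observation by a propose–weight–resample step.

```python
import math
import random

def update(particles, obs, hazard=0.2, obs_sd=1.0, lo=0.0, hi=10.0, rng=random):
    """One propose-weight-resample step: the posterior over the latent mean
    is a modest set of samples, updated online with each observation."""
    # With probability `hazard`, a change point may have occurred:
    # redraw that sample from the (uniform) prior over the latent mean.
    proposals = [rng.uniform(lo, hi) if rng.random() < hazard else p
                 for p in particles]
    # Weight each sample by the Gaussian likelihood of the new observation,
    # then resample to keep the sample set at a fixed, modest size.
    weights = [math.exp(-0.5 * ((obs - p) / obs_sd) ** 2) for p in proposals]
    return rng.choices(proposals, weights=weights, k=len(particles))

rng = random.Random(0)
particles = [rng.uniform(0.0, 10.0) for _ in range(100)]
for obs in [2.1, 1.9, 2.0, 8.2, 7.9, 8.1]:  # latent mean jumps from ~2 to ~8
    particles = update(particles, obs, rng=rng)
mean_est = sum(particles) / len(particles)
print(mean_est)  # the sample-based estimate tracks the new mean after the jump
```

Because the posterior is compressed into a small sample set, repeated runs on the same observations produce variable estimates, which is the kind of stimulus-dependent response variability the abstract attributes to sampling-based inference.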