Weber's law states that the discriminability between two stimulus intensities depends only on their ratio. Despite its status as the cornerstone of psychophysics, the mechanisms underlying Weber's law are still debated, as no principled way exists to choose among its many proposed explanations. We studied this problem by training rats to discriminate the lateralization of sounds of different overall level. We found that the rats' discrimination accuracy in this task is level-invariant, consistent with Weber's law. Surprisingly, the shape of the reaction time distributions is also level-invariant, implying that the only behavioral effect of changes in the overall level of the sounds is a uniform scaling of time. Furthermore, we demonstrate that Weber's law breaks down if the stimulus duration is capped at values shorter than the typical reaction time. Together, these facts suggest that Weber's law is associated with a process of bounded evidence accumulation. Consistent with this hypothesis, we show that, among a broad class of sequential sampling models, the only robust mechanism consistent with reaction time scale-invariance is based on perfect accumulation of evidence up to a constant bound, Poisson-like statistics, and a power-law encoding of stimulus intensity. Fits of a minimal diffusion model with these characteristics describe the rats' performance and reaction time distributions with virtually no error. Various manipulations of motivation were unable to alter the rats' psychometric function, demonstrating the stability of the just-noticeable difference and suggesting that, at least under some conditions, the bound for evidence accumulation can set a hard limit on discrimination accuracy. Our results establish the mechanistic foundation of the process of intensity discrimination and clarify the factors that limit the precision of sensory systems.
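As a rough illustration of why this combination produces Weber's law, the sketch below (our own, not the authors' fitted model; the parameters beta, k, and bound are arbitrary assumptions) simulates a diffusion with Poisson-like noise and power-law intensity encoding at two overall levels with the same left/right ratio. Accuracy is unchanged, while reaction times simply rescale.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(I_left, I_right, beta=0.5, k=50.0, bound=2.0,
                 dt=1e-3, n_trials=20_000):
    """Accumulate the difference of two power-law-encoded Poisson-like
    streams to a constant symmetric bound; return (accuracy, mean RT)."""
    lam_l, lam_r = k * I_left**beta, k * I_right**beta
    mu = (lam_l - lam_r) * dt              # drift per time step
    sigma = np.sqrt((lam_l + lam_r) * dt)  # Poisson-like noise: variance = rate
    x = np.zeros(n_trials)                 # accumulated evidence
    t = np.zeros(n_trials)                 # decision times
    done = np.zeros(n_trials, dtype=bool)
    while not done.all():
        active = ~done
        x[active] += mu + sigma * rng.standard_normal(active.sum())
        t[active] += dt
        done = (x >= bound) | (x <= -bound)
    return (x >= bound).mean(), t.mean()

# Same 2:1 left/right intensity ratio at two overall levels 10x apart.
acc_lo, rt_lo = simulate_ddm(0.2, 0.1)
acc_hi, rt_hi = simulate_ddm(2.0, 1.0)
print(f"accuracy:      {acc_lo:.3f} vs {acc_hi:.3f}")  # ~equal: level-invariant
print(f"mean RT ratio: {rt_lo / rt_hi:.2f}")           # ~10**beta: time rescaling
```

The point of the Poisson-like statistics is that the variance scales with the rate: when the overall level is multiplied by c, both the drift and the diffusion variance scale as c**beta, so the drift-to-variance ratio, and hence accuracy, is level-invariant, while every first-passage time contracts by the same factor c**(-beta).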
Diffusion decision models (DDMs) are immensely successful models for decision making under uncertainty and time pressure. In the context of perceptual decision making, these models typically start with two input units, organized in a neuron–antineuron pair. In contrast, in the brain, sensory inputs are encoded through the activity of large neuronal populations. Moreover, while DDMs are wired by hand, the nervous system must learn the weights of the network through trial and error. There is currently no normative theory of learning in DDMs and therefore no theory of how decision makers could learn to make optimal decisions in this context. Here, we derive such a rule for learning a near-optimal linear combination of DDM inputs based on trial-by-trial feedback. The rule is Bayesian in the sense that it learns not only the mean of the weights but also the uncertainty around this mean in the form of a covariance matrix. In this rule, the rate of learning is proportional (respectively, inversely proportional) to confidence for incorrect (respectively, correct) decisions. Furthermore, we show that, in volatile environments, the rule predicts a bias toward repeating the same choice after correct decisions, with a bias strength that is modulated by the previous choice’s difficulty. Finally, we extend our learning rule to cases for which one of the choices is more likely a priori, which provides insights into how such biases modulate the mechanisms leading to optimal decisions in diffusion models.
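A generic sketch of such a rule is shown below (an assumed-density-filtering caricature of online Bayesian logistic regression, not the paper's exact derivation; the input dimensionality, noise-free feedback, and priors are assumptions). It exhibits the key property: the update is proportional to one minus the probability assigned to the correct side, which is small after confident correct choices and large after confident errors.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8                                  # number of sensory input units (assumed)
w_true = rng.standard_normal(dim)        # weights an ideal observer would use

mu = np.zeros(dim)                       # posterior mean over readout weights
Sigma = np.eye(dim)                      # posterior covariance (uncertainty)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for trial in range(2000):
    x = rng.standard_normal(dim)         # population input on this trial
    y = 1.0 if w_true @ x > 0 else -1.0  # rewarded side, revealed by feedback

    p_y = sigmoid(y * (mu @ x))          # probability assigned to the correct side
    # If the choice sign(mu @ x) was correct, 1 - p_y = 1 - confidence
    # (small when confident); if it was an error, 1 - p_y = confidence
    # (large when the error was confident) -- the modulation described above.
    h = p_y * (1.0 - p_y)
    Sx = Sigma @ x
    Sigma -= np.outer(Sx, Sx) * (h / (1.0 + h * (x @ Sx)))  # Laplace-style shrink
    mu += (1.0 - p_y) * y * (Sigma @ x)                     # confidence-scaled step

print("alignment with ideal weights:",
      mu @ w_true / (np.linalg.norm(mu) * np.linalg.norm(w_true)))
```

Because the step is preconditioned by the covariance, directions of the weight space that are still uncertain are updated aggressively while well-learned directions are left alone, which is what makes the rule Bayesian rather than a fixed-rate delta rule.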
Biological systems evolved to be functionally robust in uncertain environments, but also highly adaptable. Such robustness is partly achieved by genetic redundancy, whereby the failure of a specific component through mutation or environmental challenge can be compensated for by duplicate components capable of performing, to a limited extent, the same function. Highly variable environments should therefore require very robust systems; conversely, predictable environments should not place a high selective value on robustness. Here we test this hypothesis by investigating the evolutionary dynamics of genetic redundancy in extremely reduced genomes, found mostly in intracellular parasites and endosymbionts. By combining data analysis with simulations of genome evolution, we show that, amid the extensive gene loss suffered by reduced genomes, there is a selective drive to keep the diversity of protein families while sacrificing paralogy. We show that this is not a by-product of the known drivers of genome reduction and that there is very limited convergence to a common core of families, indicating that the repertoire of protein families in reduced genomes is the result of historical contingency and niche-specific adaptations. We propose that our observations reflect a loss of genetic redundancy due to decreased selection for robustness in a predictable environment.
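The dynamic can be caricatured in a few lines (our toy model, not the authors' simulations; the family count, copy number, and selection strength are arbitrary assumptions): random gene deletions that are usually rejected when they would eliminate a family's last copy strip paralogs long before family diversity erodes.

```python
import random
from collections import Counter

random.seed(2)
n_families, copies = 1000, 3   # ancestral genome: 3 paralogs per family (assumed)
genome = [fam for fam in range(n_families) for _ in range(copies)]
p_lethal = 0.95                # chance that losing a family's last copy is rejected

counts = Counter(genome)
while len(genome) > n_families // 2:
    i = random.randrange(len(genome))
    fam = genome[i]
    if counts[fam] == 1 and random.random() < p_lethal:
        continue               # selection preserves the family's last copy
    genome.pop(i)              # otherwise the deletion is effectively neutral
    counts[fam] -= 1
    if len(genome) % 500 == 0:
        fams = sum(1 for c in counts.values() if c > 0)
        print(f"genes={len(genome):4d}  families={fams:4d}  "
              f"paralogs={len(genome) - fams:4d}")
```

Running this prints a trajectory in which the paralog count collapses toward zero while the number of distinct families stays near its ancestral value, and only afterwards does family diversity itself begin to erode.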
In standard models of perceptual decision-making, noisy sensory evidence is considered to be the primary source of choice errors, and the accumulation of evidence needed to overcome this noise gives rise to speed-accuracy tradeoffs. Here, we investigated how the history of recent choices and their outcomes interact with these processes using a combination of theory and experiment. We found that the speed and accuracy of performance of rats on olfactory decision tasks could be best explained by a Bayesian model that combines reinforcement-based learning with accumulation of uncertain sensory evidence. This model predicted the specific pattern of trial history effects that were found in the data. The results suggest that learning is a critical factor contributing to speed-accuracy tradeoffs in decision-making, and that task history effects are not simply biases but rather the signatures of an optimal learning strategy.
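One way to see how such learning produces history effects is the toy simulation below (our illustration, not the paper's fitted Bayesian model; the learning rate, start-point gain, and task statistics are assumptions): a win-stay/lose-shift value update biases the diffusion's starting point, yielding a tendency to repeat choices after correct trials and to alternate after errors.

```python
import numpy as np

rng = np.random.default_rng(3)
bound, dt, lr, gain = 1.0, 5e-3, 0.2, 0.3   # assumed parameters
q = 0.0                                     # learned value: > 0 favors "right"

def ddm_trial(drift, start):
    """Scalar diffusion to a symmetric bound; returns (chose_right, RT)."""
    x, t = float(np.clip(start, -0.9 * bound, 0.9 * bound)), 0.0
    while abs(x) < bound:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t

repeat = {True: [0, 0], False: [0, 0]}      # after-correct / after-error tallies
prev = None
for trial in range(2000):
    right_is_correct = rng.random() < 0.5
    drift = 1.0 if right_is_correct else -1.0
    choice, rt = ddm_trial(drift, start=gain * q)  # history shifts the start point
    correct = choice == right_is_correct
    # Win-stay/lose-shift value update: move q toward the rewarded side.
    target = (1.0 if choice else -1.0) * (1.0 if correct else -1.0)
    q += lr * (target - q)
    if prev is not None:
        repeat[prev[1]][choice == prev[0]] += 1
    prev = (choice, correct)

for was_correct, (diff, same) in repeat.items():
    label = "correct" if was_correct else "error"
    print(f"after {label:7s}: P(repeat) = {same / (same + diff):.3f}")
```

Even though the stimuli are independent across trials, the learned bias makes repetition more likely after correct choices and less likely after errors, the same signature of sequential effects that the model above attributes to an optimal learning strategy rather than to a simple response bias.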