Deciding between stimuli requires combining their learned value with one's sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behavior conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine reward value with sensory confidence and guide subsequent learning.
Making efficient decisions requires combining present sensory evidence with previous reward values, and learning from the resulting outcome. To establish the underlying neural processes, we trained mice in a task that probed such decisions. Mouse choices conformed to a reinforcement learning model that estimates predicted value (reward value times sensory confidence) and prediction error (outcome minus predicted value). Predicted value was encoded in the pre-outcome activity of prelimbic frontal neurons and midbrain dopamine neurons. Prediction error was encoded in the post-outcome activity of dopamine neurons, which reflected not only reward value but also sensory confidence. Manipulations of these signals spared ongoing choices but profoundly affected subsequent learning. Learning depended on the pre-outcome activity of prelimbic neurons, but not dopamine neurons. Learning also depended on the post-outcome activity of dopamine neurons, but not prelimbic neurons. These results reveal the distinct roles of frontal and dopamine neurons in learning under uncertainty.
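The model summarized in these abstracts reduces to two quantities: a predicted value equal to sensory confidence times learned reward value, and a prediction error equal to outcome minus predicted value. Below is a minimal Python sketch of such a confidence-weighted reinforcement learning loop, assuming a signed-contrast stimulus, reward only for correct choices, a logistic confidence mapping, a greedy choice rule, and parameters alpha and sigma; all of these are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np


def simulate_confidence_weighted_rl(contrasts, reward_left, reward_right,
                                    alpha=0.5, sigma=0.2, seed=0):
    """Sketch of a signal-detection + reinforcement-learning model.

    Each trial: form a noisy percept of the stimulus, convert it to a
    decision confidence, multiply confidence by learned value to get the
    predicted value of each option, choose, and update the chosen value
    with a confidence-scaled prediction error.
    """
    rng = np.random.default_rng(seed)
    q = np.array([1.0, 1.0])          # learned value of the left / right option
    choices, prediction_errors = [], []
    for t, c in enumerate(contrasts):
        x = c + rng.normal(0.0, sigma)              # noisy internal estimate of the stimulus
        p_right = 1.0 / (1.0 + np.exp(-x / sigma))  # belief that the stimulus is rightward
        confidence = np.array([1.0 - p_right, p_right])
        predicted_value = confidence * q            # predicted value = confidence x learned value
        choice = int(np.argmax(predicted_value))    # greedy choice rule (illustrative)
        correct = (choice == 1) == (c > 0)
        outcome = (reward_right[t] if choice == 1 else reward_left[t]) if correct else 0.0
        pe = outcome - predicted_value[choice]      # prediction error = outcome - predicted value
        q[choice] += alpha * confidence[choice] * pe  # confidence-scaled value update (assumed form)
        choices.append(choice)
        prediction_errors.append(pe)
    return np.array(choices), np.array(prediction_errors)
```

Under this sketch, low-confidence trials produce smaller predicted values and therefore larger positive prediction errors when rewarded, which is the confidence grading of dopaminergic outcome responses that the abstracts describe.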
The striatum plays critical roles in visually guided decision making and receives dense axonal projections from midbrain dopamine neurons. However, the roles of striatal dopamine in visual decision making are poorly understood. We trained male and female mice to perform a visual decision task with asymmetric reward payoff, and we recorded the activity of dopamine axons innervating the striatum. Dopamine axons in the dorsomedial striatum (DMS) responded to contralateral visual stimuli and contralateral rewarded actions. Neural responses to contralateral stimuli could not be explained by orienting behavior such as eye movements. Moreover, these contralateral stimulus responses persisted in sessions in which the animals were instructed not to move to obtain reward, further indicating that these signals are stimulus-related. Lastly, we show that DMS dopamine signals were qualitatively different from dopamine signals in the ventral striatum, which responded to both ipsi- and contralateral stimuli, conforming to canonical prediction-error signaling under sensory uncertainty. Thus, during visual decisions, DMS dopamine encodes visual stimuli and rewarded actions in a lateralized fashion and could facilitate associations between specific visual stimuli and actions.
Midbrain dopamine neurons play key roles in decision-making by regulating reward valuation and actions. These roles are thought to depend on dopamine neurons innervating the striatum. In addition to actions and rewards, however, efficient decisions often involve consideration of uncertain sensory signals. The functions of striatal dopamine during sensory decisions remain unknown. We trained mice in a task that probed decisions based on sensory evidence and reward value, and recorded the activity of striatal dopamine axons. Dopamine axons in the ventral striatum (VS) responded to bilateral stimuli and trial outcomes, encoding prediction errors that scaled with decision confidence and reward value. By contrast, dopamine axons in the dorsal striatum (DS) responded to contralateral stimuli and contralateral actions. Thus, during sensory decisions, striatal dopamine signals are anatomically organized: VS dopamine resembles prediction errors suitable for reward maximization under sensory uncertainty, whereas DS dopamine encodes specific combinations of stimuli and actions in a lateralized fashion.