2022
DOI: 10.1126/science.abq6740

Mesolimbic dopamine release conveys causal associations

Abstract: Learning to predict rewards based on environmental cues is essential for survival. It is believed that animals learn to predict rewards by updating predictions whenever the outcome deviates from expectations, and that such reward prediction errors (RPEs) are signaled by the mesolimbic dopamine system—a key controller of learning. However, instead of learning prospective predictions from RPEs, animals can infer predictions by learning the retrospective cause of rewards. Hence, whether mesolimbic dopamine instead…
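The contrast the abstract draws—prospective prediction (cue → reward) versus retrospective causal inference (reward → cue)—can be illustrated with a toy trial log. The trial data and counting scheme below are illustrative assumptions for exposition only, not the paper's actual model:

```python
# Toy illustration: prospective vs. retrospective association estimates
# from hypothetical discrete trials (made-up data, not from the paper).
trials = [
    ("cue", "reward"), ("cue", "reward"), ("cue", "reward"),
    ("cue", "no_reward"),
    ("no_cue", "reward"), ("no_cue", "reward"),
    ("no_cue", "no_reward"),
]

n_cue = sum(1 for c, _ in trials if c == "cue")
n_reward = sum(1 for _, r in trials if r == "reward")
n_both = sum(1 for c, r in trials if c == "cue" and r == "reward")

# Prospective: given the cue, how often does reward follow?
prospective = n_both / n_cue        # P(reward | cue) = 3/4 = 0.75
# Retrospective: given a reward, how often was the cue its antecedent?
retrospective = n_both / n_reward   # P(cue | reward) = 3/5 = 0.6

print(prospective, retrospective)
```

The two quantities can diverge (here 0.75 vs. 0.6), which is why a learner inferring causes backward from rewards is not equivalent to one predicting rewards forward from cues.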

Cited by 122 publications
(211 citation statements)
References 96 publications
“…Indeed, DA encodes causal associations regardless of the unconditioned stimulus (i.e., rewarding or aversive), at least in regions such as the nucleus accumbens core (Jeong et al., 2022) but also systemically (Roughley et al., 2021). But how information from these DA neuron types converges to assign and continuously update reward and/or fear 'value' is unknown.…”
Section: Discussion
confidence: 99%
“…While phasic dopamine responses have long been assumed to encode a scalar prediction-error value as calculated by a single temporal difference learning agent (Montague et al., 1996; Schultz et al., 1997), the growing wealth of studies showing the existence of multiple simultaneous prediction errors calls for a reinterpretation of the role of dopamine in signaling prediction errors (Langdon et al., 2018). In line with this proposal, recently published findings suggest that dopamine function might actually be better understood as retrospective causal inference (i.e., credit assignment), contesting the prospective temporal difference reinforcement learning framework (Jeong et al., 2022). While this work calls the prevailing reward prediction error hypothesis of mesolimbic dopamine into question, its proposed model is highly consistent with, and reliant on, the notion of surprise to solve the (temporal) credit assignment problem.…”
Section: Studies Investigating the Distinction Between Model-free And...
confidence: 96%
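The temporal difference account referenced in the statement above can be sketched minimally. This tabular TD(0) toy (the single-cue task, reward probability, and learning rate are illustrative assumptions, not taken from any cited paper) shows the prediction error delta that phasic dopamine is classically proposed to encode:

```python
import random

def td_learning(episodes=2000, alpha=0.1, p_reward=0.8):
    """Tabular TD(0) on a one-step task: a cue state, then probabilistic reward.

    delta = r + V(next) - V(current) is the reward prediction error (RPE).
    Under the classical hypothesis, phasic dopamine reports this delta.
    """
    V = {"cue": 0.0, "terminal": 0.0}  # state-value estimates
    deltas = []
    for _ in range(episodes):
        r = 1.0 if random.random() < p_reward else 0.0
        delta = r + V["terminal"] - V["cue"]  # RPE at reward delivery
        V["cue"] += alpha * delta             # update toward the outcome
        deltas.append(delta)
    return V, deltas

V, deltas = td_learning()
# After learning, V["cue"] approaches the expected reward (p_reward),
# and the average RPE at reward time shrinks toward zero.
```

As the quoted statement notes, a retrospective causal-inference account replaces this forward-looking delta with credit assigned backward from rewards to candidate causes, while still relying on surprise-like signals.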
“…Historically, dopamine prediction errors have been thought to contribute to cue-reward learning by endowing cues with a model-free, scalar value [29–34]. However, recent studies have revealed that these phasic signals can also support the development of model-based associations [36–47]. For example, VTA DA neurons are necessary for the development of associations between cues and sensory-specific representations of rewards [38,40].…”
Section: Confirming the Presence Of A Novel Dopaminergic Projection F...
confidence: 99%
“…Further, the wider circuitry supporting reward learning in the LH is unclear. One candidate for this is input from dopamine neurons in the ventral tegmental area (VTA), which we now understand to be capable of supporting both model-free and model-based associative learning [29–47]. However, there are very few studies that have suggested the presence of a projection from dopamine neurons to the VTA [48,49], and none that explore the function of this projection.…”
Section: Introduction
confidence: 99%