Prediction-error signals consistent with formal models of “reinforcement learning” (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models—namely, “actor/critic” models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
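The two prediction-error subtypes the abstract dissociates can be sketched as update rules. This is a minimal illustration of the generic temporal-difference forms, not the paper's fitted model: in the actor/critic, a single state-value-prediction error (SVPE), computed without reference to the chosen action, trains both the critic's state values and the actor's action preferences, whereas Q-learning computes an action-value-prediction error (AVPE) tied to the action actually taken. All function and variable names here are illustrative.

```python
import numpy as np

def actor_critic_update(V, pref, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Actor/critic: one action-independent SVPE drives both modules."""
    svpe = r + gamma * V[s_next] - V[s]   # SVPE: depends only on state values
    V[s] += alpha * svpe                  # critic: update state value
    pref[s, a] += alpha * svpe            # actor: same SVPE reinforces the action
    return svpe

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Action-value learning: the AVPE depends on the taken action's value."""
    avpe = r + gamma * Q[s_next].max() - Q[s, a]  # AVPE: action-specific
    Q[s, a] += alpha * avpe
    return avpe
```

The key computational signature is visible in the first line of each function: the SVPE never consults `a`, while the AVPE subtracts the value of the specific state–action pair.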
Visual “pop-out” occurs when a unique visual target (e.g., a feature singleton) is present among a set of homogeneous distractors. However, the role of visual awareness in this process remains unclear. Here we show that, even though subjects were not aware of a suppressed pop-out display, their subsequent performance on an orientation discrimination task was significantly better at the pop-out location than at a control location. These results indicate that visual awareness of a feature singleton is not necessary for it to attract attention. Furthermore, our results show that the subliminal pop-out effect disappeared when subjects diverted their attention toward an RSVP task while viewing the same subliminal pop-out display, suggesting that the availability of top-down attention is necessary for the subliminal pop-out effect, and that the cognitive processes underlying attention and awareness are somewhat independent.
The model-free algorithms of "reinforcement learning" (RL) have gained clout across disciplines, but so too have model-based alternatives. The present study emphasizes other dimensions of this model space, considering associative or discriminative generalization across states and actions. This "generalized reinforcement learning" (GRL) model, a frugal extension of RL, parsimoniously retains the single reward-prediction error (RPE), but the scope of learning goes beyond the experienced state and action. Instead, the generalized RPE is efficiently relayed for bidirectional counterfactual updating of value estimates for other representations. Aided by structural information but as an implicit rather than explicit cognitive map, GRL provided the most precise account of human behavior and individual differences in a reversal-learning task with hierarchical structure that encouraged inverse generalization across both states and actions. Reflecting inference that could be true, false (i.e., overgeneralization), or absent (i.e., undergeneralization), state generalization distinguished those who learned well more so than action generalization. With high-resolution high-field fMRI targeting the dopaminergic midbrain, the GRL model's RPE signals (alongside value and decision signals) were localized within not only the striatum but also the substantia nigra and the ventral tegmental area, including specific effects of generalization that also extend to the hippocampus. Factoring in generalization as a multidimensional process in value-based learning, these findings shed light on complexities that, while challenging classic RL, can still be resolved within the bounds of its core computations.
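The GRL idea of a single RPE relayed for counterfactual updating can be sketched as follows. This is a hypothetical illustration only: the two-state, two-action pairing, the sign-flipped ("inverse") updates for the non-experienced state and action, and the generalization weights `g_state` and `g_action` are assumptions chosen to convey bidirectional generalization, not the paper's fitted parameterization.

```python
import numpy as np

def grl_update(Q, s, a, r, alpha=0.2, g_state=0.5, g_action=0.5):
    """One experienced (s, a, r) event yields a single RPE that also
    updates value estimates for non-experienced representations.
    Assumes a toy 2-state x 2-action task where the alternative state
    and action are updated in the opposite (inverse) direction."""
    rpe = r - Q[s, a]                     # single RPE for the experienced pair
    s_alt, a_alt = 1 - s, 1 - a          # the non-experienced state and action
    Q[s, a] += alpha * rpe                              # direct update
    Q[s, a_alt] -= alpha * g_action * rpe               # inverse, across actions
    Q[s_alt, a] -= alpha * g_state * rpe                # inverse, across states
    Q[s_alt, a_alt] += alpha * g_state * g_action * rpe # joint generalization
    return rpe
```

Setting `g_state = g_action = 0` recovers plain action-value learning, which is the sense in which GRL is a frugal extension of RL: generalization is added without introducing a second error signal.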