2013
DOI: 10.1016/j.neuron.2013.08.009
Disruption of Dorsolateral Prefrontal Cortex Decreases Model-Based in Favor of Model-free Control in Humans

Abstract (Summary): Human choice behavior often reflects a competition between an inflexible, computationally efficient system of control on the one hand and a slower, more flexible system of control on the other. This distinction is well captured by model-free and model-based reinforcement learning algorithms. Here, studying human subjects, we show it is possible to shift the balance of control between these systems by disruption of right dorsolateral prefrontal cortex, such that participants manifest a dominance of the less optimal mod…

Citation Types: 24 supporting, 223 mentioning, 3 contrasting
Cited by 231 publications (250 citation statements)
References 32 publications (49 reference statements)
“…This process hinders adaptation to changes in the environment, but has the advantage of computational simplicity. Previous studies show distinct behavioral and neurobiological signatures of both systems (8–18). Furthermore, consistent with the theoretical strengths and weaknesses of each system (2, 19), different experimental conditions shift the relative contributions of the two systems to the control of behavior according to their respective competencies (20–23).…”
supporting
confidence: 63%
See 1 more Smart Citation
“…This process hinders adaptation to changes in the environment, but has advantageous computational simplicity. Previous studies show distinct behavioral and neurobiological signatures of both systems (8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18). Furthermore, consistent with the theoretical strengths and weaknesses of each system (2,19), different experimental conditions influence the relative contributions of the two systems in controlling behavior according to their respective competencies (20)(21)(22)(23).…”
supporting
confidence: 63%
“…Previous studies of planning have used shallow tasks (8–18, 20–23) and have found evidence for the two extreme values of k. Rather than this dichotomous dependence on either goal-directed or habitual systems, we hypothesize that individuals use an integrative plan-until-habit system for decision making with intermediate values of k. We further hypothesize that the choice of k is a covert internal decision that is influenced by the availability of cognitive resources.…”
mentioning
confidence: 89%
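The plan-until-habit idea quoted above can be sketched as a depth-limited tree search: expand the decision tree to depth k using an explicit transition model, then substitute cached model-free ("habit") values at the leaves. The toy task, state names, and all numeric values below are illustrative assumptions, not taken from the cited paper.

```python
def plan_until_habit(state, k, model, cached_q, gamma=0.9):
    """Best action value from `state`, planning k steps before habitizing."""
    if k == 0 or not model[state]:
        # Planning horizon exhausted (or terminal state): fall back on
        # habitual, model-free value estimates.
        return max(cached_q[state].values())
    values = []
    for action, (reward, next_states) in model[state].items():
        # Expectation over the (assumed known) transition distribution.
        future = sum(p * plan_until_habit(s, k - 1, model, cached_q, gamma)
                     for s, p in next_states.items())
        values.append(reward + gamma * future)
    return max(values)

# A tiny two-step task: first-stage choices lead stochastically (common
# vs. rare transitions) to second-stage states with different cached values.
model = {
    "start": {"left": (0.0, {"A": 0.7, "B": 0.3}),
              "right": (0.0, {"A": 0.3, "B": 0.7})},
    "A": {"go": (1.0, {"end": 1.0})},
    "B": {"go": (0.2, {"end": 1.0})},
    "end": {},
}
cached_q = {"start": {"left": 0.5, "right": 0.5},
            "A": {"go": 0.9}, "B": {"go": 0.1},
            "end": {"stay": 0.0}}

print(plan_until_habit("start", 2, model, cached_q))  # deep: fully model-based
print(plan_until_habit("start", 0, model, cached_q))  # k = 0: pure habit
```

Intermediate values of k interpolate between these two extremes, which is the integrative regime the excerpt hypothesizes.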
“…In a similar vein, dopamine is implicated in a modulation of PFC maintenance processes via a gating of cortical gain, rendering coding of relevant environmental information more robust against noise (11,26,27). Indeed, the importance of lateral PFC for model-based inference is supported by findings that theta-burst transcranial magnetic stimulation compromises model-based control in humans (28). Our analysis of second-stage reaction times, which were affected by the state transition matrix, showed that a response time difference for rare versus common states was positively related to a bias toward more model-based choices.…”
Section: Discussion
mentioning
confidence: 97%
“…In the most salient work along these lines, reward-based tree search has been conceptualized in terms provided by model-based reinforcement learning, a computational framework in which reward-based decisions are based on an explicit model of the choice problem, a "cognitive map" of the decision tree itself (11). Under this rubric, recent work has illuminated several aspects of reward-based tree search, providing an indication of how representations of decision problems are acquired and updated (12–14), where in the brain relevant quantities (e.g., cumulative rewards) are represented (15–17), and how model-based decision making interacts with simpler, habit-based choice mechanisms (15, 18–22). Despite such advances, however, comparatively little progress has so far been made toward characterizing the concrete process by which model-based decisions are reached, that is, the actual procedure through which a representation of the decision problem is translated into a choice (9, 10, 23). This situation contrasts sharply with what one finds in the literature on simple choice, where a number of detailed process models have been proposed.…”
mentioning
confidence: 99%
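The interaction between model-based and habit-based mechanisms described in this excerpt is commonly formalized as a weighted mixture of the two controllers' action values fed into a softmax choice rule. Below is a minimal sketch of that standard hybrid convention; the function name, the example values, and the parameters `w` and `beta` are assumptions for illustration, not this paper's exact fitting procedure.

```python
import math

def hybrid_choice_probs(q_mb, q_mf, w, beta=1.0):
    """Softmax choice over Q(a) = w*Q_MB(a) + (1-w)*Q_MF(a).

    w near 1 -> model-based control dominates; w near 0 -> model-free.
    beta is the inverse temperature (choice determinism).
    """
    q = {a: w * q_mb[a] + (1 - w) * q_mf[a] for a in q_mb}
    z = sum(math.exp(beta * v) for v in q.values())
    return {a: math.exp(beta * q[a]) / z for a in q}

# Illustrative values on which the two controllers disagree:
q_mb = {"left": 1.0, "right": 0.0}   # model-based system prefers "left"
q_mf = {"left": 0.0, "right": 1.0}   # model-free habit prefers "right"

print(hybrid_choice_probs(q_mb, q_mf, w=0.8))  # mostly model-based
print(hybrid_choice_probs(q_mb, q_mf, w=0.2))  # shifted toward model-free
```

In this framing, the headline result of the indexed paper corresponds to a shift of the weighting toward the model-free end after dlPFC disruption.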