2015
DOI: 10.1016/j.cobeha.2015.08.009
Reinforcement learning, efficient coding, and the statistics of natural tasks

Abstract: The application of ideas from computational reinforcement learning has recently enabled dramatic advances in behavioral and neuroscientific research. For the most part, these advances have involved insights concerning the algorithms underlying learning and decision making. In the present article, we call attention to the equally important but relatively neglected question of how problems in learning and decision making are internally represented. To articulate the significance of representation for reinforceme…

Cited by 51 publications (39 citation statements)
References 54 publications
“…We predict that a number of other high- and mid-level object and scene properties are linked to these statistics, which would be consistent with the hypothesis that cortical tuning to statistical regularities produces representations that are informative to a broad range of stimulus properties. Similar statistical principles may underlie the cortical representations that support other high-level cognitive functions, such as object categorization 57 , face recognition 58 , and reinforcement learning 59 .…”
Section: Natural Statistics and Cortical Representation
confidence: 90%
“…Our results add to a growing literature on structure learning , extending its principles to the domain of cognitive control policies (Braun, Mehring, & Wolpert, 2010; Collins & Frank, 2013; Gershman, Blei, & Niv, 2010; Huys et al, 2015; Rougier, Noelle, Braver, Cohen, & O’Reilly, 2005). Structure learning refers to our ability to identify and leverage invariant structure in the space of natural tasks to improve learning of novel tasks (Botvinick et al, 2009, 2015; Gershman & Niv, 2010; Tenenbaum, Kemp, Griffiths, & Goodman, 2011). …”
Section: Discussion
confidence: 99%
“…Just like different real-world tasks often share stimulus-response-outcome contingencies, they also share other forms of dynamic structure (Botvinick, Weinstein, Solway, & Barto, 2015; Schank & Abelson, 1977). Such shared structure affords an opportunity for generalization of internal control policies.…”
Section: Introduction
confidence: 99%
“…It is well known in fields like AI and RL that finding appropriate task decomposition is key to solving complex problems [4][5][6]. The contribution of this paper to the literature is identifying a mapping between different control problems and corresponding information measures, all of which can be used within a unitary method to derive task-relevant decompositions or clusters.…”
Section: Discussion
confidence: 99%
“…In artificial intelligence (AI), reinforcement learning (RL) and related areas, it has been long known that learning to code the problem space efficiently is paramount to finding efficient solutions [4][5][6]. While in several practical AI and RL applications, the so-called state-space is provided by programmers by hand, for example in the form of a grid of discrete cells that cover the whole space, it is not guaranteed that this is the most efficient representation to solve the problem at hand.…”
Section: Introduction
confidence: 99%
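The last excerpt describes the common practice of hand-coding an RL state space as "a grid of discrete cells that cover the whole space." A minimal sketch of what that looks like in practice (all names, bounds, and resolutions here are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def grid_state(pos, lo=0.0, hi=1.0, n_cells=10):
    """Map a continuous 2-D position to a discrete grid-cell index.

    This is the kind of hand-coded state space the passage describes:
    the continuous space is covered by a uniform grid, and each cell
    becomes one tabular state.
    """
    # Clip into the covered range, then bin each coordinate.
    x, y = np.clip(pos, lo, hi - 1e-9)
    col = int((x - lo) / (hi - lo) * n_cells)
    row = int((y - lo) / (hi - lo) * n_cells)
    return row * n_cells + col  # single index into a Q-table

# A tabular Q-table over the hand-coded grid: one row per cell.
n_actions = 4
q_table = np.zeros((10 * 10, n_actions))
s = grid_state((0.25, 0.9))  # the agent's continuous position as a cell index
```

The point the excerpt makes is that nothing guarantees such a uniform grid is the *efficient* representation: cell size and layout are fixed by the programmer rather than adapted to the statistics of the task.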