2020
DOI: 10.1371/journal.pcbi.1007805

Learning action-oriented models through active inference

Abstract: Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn onl…



Cited by 79 publications (61 citation statements)
References 69 publications
“…A first novelty with respect to previous models implementing planning as inference based on brain-like mechanisms [38][39][40] is that our architecture proposes a hypothesis on how organisms might learn the world model while using it for planning. This is a key challenge for planning, as recently highlighted in [46]. The challenge is different from the exploration/exploitation issue in model-free models [4], and requires arbitration mechanisms different from the classic ones used to balance goal-directed and habitual processes [47,48].…”
Section: Discussion of the General Features of the Model
confidence: 99%
“…The model-free literature on reinforcement learning [4] studies the important problem of the exploration-exploitation trade-off, where an agent must decide whether to take random actions to explore the environment and learn the policies that lead to rewards, or to exploit those policies to maximize rewards. A less-studied problem involves model-based/goal-directed agents facing an analogous but distinct trade-off [44][45][46]. In particular, when these agents solve new tasks they have to decide whether to explore to refine the world model, or to exploit that model to plan and act.…”
Section: Introduction
confidence: 99%
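The trade-off quoted above can be made concrete with a minimal sketch (not taken from the cited paper; all names and the scoring rule are illustrative assumptions): a model-based agent scores each action by mixing the expected reward under its learned world model with the expected information gain about that action's outcome, here tracked with a Beta posterior per action. A large exploration weight favors the action the model knows least about; a weight of zero reduces to pure exploitation.

```python
# Illustrative sketch of model-based explore/exploit arbitration.
# Each action's outcome probability is tracked with a Beta(a, b) posterior;
# the action value mixes expected reward with expected information gain.
# All function and variable names here are hypothetical.

def expected_info_gain(a, b):
    # Approximate information gain as the expected drop in posterior
    # variance of Beta(a, b) after one more pseudo-observation.
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    var_next = (a * b) / ((a + b + 1) ** 2 * (a + b + 2))
    return var - var_next

def action_values(posteriors, explore_weight=1.0):
    # posteriors: {action_name: (a, b)} Beta parameters per action.
    values = {}
    for action, (a, b) in posteriors.items():
        expected_reward = a / (a + b)              # exploitation term
        info_gain = expected_info_gain(a, b)       # exploration term
        values[action] = expected_reward + explore_weight * info_gain
    return values

# "left" is well explored and rewarding; "right" is almost unexplored.
posteriors = {"left": (9.0, 3.0), "right": (1.0, 1.0)}
exploit_choice = max(action_values(posteriors, explore_weight=0.0).items(),
                     key=lambda kv: kv[1])[0]
explore_choice = max(action_values(posteriors, explore_weight=50.0).items(),
                     key=lambda kv: kv[1])[0]
```

With `explore_weight=0.0` the agent picks the known-good action; with a large weight it picks the uninformed one to refine its model, which is the arbitration problem the quoted passage describes.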
“…From this theoretical perspective, perceptual experience is influenced not only by passive predictions about the world, but more generally by predictions encompassing the coupling or contingency between actions and sensory signals, i.e., predictions about sensorimotor contingencies (Clark, 2015; O'Regan & Noë, 2001; Seth, 2014). According to this view, the predictions that best suppress prediction errors (PEs) are not necessarily those which are the most veridical, but those which best support adaptive interactions with the world (Clark, 2015, 2016; Seth, 2014, 2015; Tschantz, Seth, & Buckley, 2020). In this light, action emerges not just as an output, but as an integral part of our experience of the world.…”
Section: Introduction
confidence: 99%