2019 | DOI: 10.3389/frobt.2019.00020

Simulating Active Inference Processes by Message Passing

Abstract: The free energy principle (FEP) offers a variational calculus-based description for how biological agents persevere through interactions with their environment. Active inference (AI) is a corollary of the FEP, which states that biological agents act to fulfill prior beliefs about preferred future observations (target priors). Purposeful behavior then results from variational free energy minimization with respect to a generative model of the environment with included target priors. However, manual derivations f…
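To make the abstract's recipe concrete, here is a minimal, hand-rolled sketch of action selection by free energy minimization. It is not the message-passing implementation the paper describes; the two-state model, the matrices, and the risk-only form of expected free energy are all illustrative assumptions.

```python
# A minimal sketch (NOT the paper's message-passing implementation) of
# action selection by free energy minimization. The two-state model,
# the matrices, and the risk-only expected free energy are assumptions.
import numpy as np

A = np.array([[0.9, 0.1],            # p(o | s): observation likelihood
              [0.1, 0.9]])
B = {0: np.array([[0.8, 0.2],        # p(s' | s, a): one transition
                  [0.2, 0.8]]),      # matrix per candidate action
     1: np.array([[0.2, 0.8],
                  [0.8, 0.2]])}
p_target = np.array([0.95, 0.05])    # target prior: strongly prefer o = 0
q_s = np.array([0.9, 0.1])           # current belief over hidden states

def expected_free_energy(a):
    """Risk term only: KL from predicted observations to the target prior."""
    q_s_next = B[a] @ q_s            # predicted state after taking action a
    q_o = A @ q_s_next               # predicted observation distribution
    return float(np.sum(q_o * np.log(q_o / p_target)))

best = min(B, key=expected_free_energy)
print(best)  # -> 0: the action whose outcomes best match the target prior
```

Choosing the action with the smallest divergence from the target prior is the abstract's "acting to fulfill prior beliefs about preferred future observations" in miniature.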

Cited by 32 publications (38 citation statements) | References 42 publications
“…One can interpret this as the agent having prior beliefs over states that it will visit, independent of any policy, but driving policy selection toward these attractor states. An obvious example of a preferred state is maintaining a temperature of 37 °C (Van De Laar and De Vries, 2019). These preferred states essentially define the agent, and can be endowed on the agent either by nature through evolution, in the case of natural agents, or by humans in the case of artificial agents.…”
Section: Methods (mentioning)
confidence: 99%
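The 37 °C example from the statement above can be made concrete with a toy target prior: the preference is encoded as a Gaussian prior over temperature observations, and candidate outcomes are scored by their surprise (negative log probability). The mean, spread, and names are assumptions for illustration only.

```python
# Hedged sketch of the 37 °C example: the preference is encoded as a
# Gaussian "target prior" over temperature observations, and candidate
# outcomes are scored by their surprise under it. The mean, spread, and
# names are illustrative assumptions.
import numpy as np

mu_target, sigma_target = 37.0, 0.5      # preferred temperature (°C)

def surprise(temp_obs):
    """Negative log density of the observation under the target prior."""
    return 0.5 * np.log(2 * np.pi * sigma_target ** 2) \
         + 0.5 * ((temp_obs - mu_target) / sigma_target) ** 2

for t in (36.8, 37.0, 39.0):
    print(t, round(surprise(t), 3))      # 39 °C is far more surprising,
                                         # so actions that cool the agent
                                         # reduce expected surprise
```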
“…Current active inference schemes usually start by specifying the belief state space manually, in terms of a generative model. Variational Bayes is then used to infer hidden states and parameters under this model (Friston et al., 2009; Sajid et al., 2019; Van De Laar and De Vries, 2019). This approach works well for low-dimensional problems, or problems where a sensible belief state can be devised for the task at hand.…”
Section: Methods (mentioning)
confidence: 99%
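A minimal sketch of the "specify a generative model, then infer hidden states" recipe described above: the model is a toy two-state HMM, and inference is one exact Bayesian filtering step per observation rather than a full variational Bayes scheme. All matrices and names are illustrative assumptions.

```python
# Minimal sketch of "specify a generative model, then infer hidden
# states". The model is a toy two-state HMM; inference is one exact
# Bayesian filtering step per observation rather than a full variational
# scheme. All matrices and names are illustrative assumptions.
import numpy as np

A = np.array([[0.9, 0.1],      # p(o | s): rows index observations
              [0.1, 0.9]])
B = np.array([[0.7, 0.3],      # p(s_t | s_{t-1}): state transitions
              [0.3, 0.7]])
q_s = np.array([0.5, 0.5])     # prior belief over the hidden state

def update_belief(q_s, o):
    """Predict with B, then condition on observation o via Bayes' rule."""
    predicted = B @ q_s
    posterior = A[o] * predicted   # elementwise likelihood weighting
    return posterior / posterior.sum()

for o in (0, 0, 1):                # a short observation sequence
    q_s = update_belief(q_s, o)
    print(q_s)                     # belief sharpens toward state 0, then relaxes
```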
“…In summary, we can interpret the dynamics of a system described by mean-field density dynamics in terms of messages (i.e., mean-fields) passed between module-like regions of a network [69, 70, 71]. For sufficiently sparse conditional dependency structures—like that of the Hamiltonian employed here—the message passing is evocative of synaptic communication in sparse neuronal networks.…”
Section: Neuronal Message Passing (mentioning)
confidence: 99%
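The idea of mean-field dynamics as local message passing can be illustrated with a toy fixed-point iteration: each unit updates its mean using only its neighbours' means under a sparse coupling matrix. The chain topology, parameters, and damping below are illustrative assumptions, not the Hamiltonian system the quoted passage analyses.

```python
# Toy illustration of mean-field dynamics as local message passing:
# each unit updates its mean using only its neighbours' means under a
# sparse coupling matrix. Topology, parameters, and damping are
# illustrative assumptions.
import numpy as np

J = np.array([[0, 1, 0, 0],            # sparse couplings: a 4-node chain
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
h = np.array([0.5, 0.0, 0.0, -0.5])    # local biases
m = np.zeros(4)                        # mean-field estimates

for _ in range(200):                   # damped fixed-point iteration
    # each unit only "hears" the terms J[i, j] * m[j] from its neighbours
    m = 0.5 * m + 0.5 * np.tanh(h + J @ m)
print(m.round(3))
```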
“…With these preliminaries in place, we can start to think about the implications this has for the neural architectures that solve particular types of inference problem. Figure 1 shows a simple graphical representation of a generative model and an interpretation of the computations required to find posterior distributions in terms of the passing of local messages [11, 12, 24]. The graphical representation on the right shows the dependencies implicit in the Bayes optimal updating of beliefs.…”
Section: Graphical Models and Inference (mentioning)
confidence: 99%
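As an illustration of "posteriors from local messages", the following sketch runs sum-product (belief propagation) on a three-variable chain with pairwise and unary potentials. The potentials are toy assumptions; exact marginals fall out of purely local forward and backward messages.

```python
# Hedged sketch of "posteriors from local messages": sum-product message
# passing on a 3-variable chain s0 - s1 - s2 with pairwise potential psi
# and unary evidence phi. All potentials are toy assumptions.
import numpy as np

psi = np.array([[2.0, 1.0],    # pairwise potential psi(s_i, s_{i+1})
                [1.0, 2.0]])
phi = np.array([[4.0, 1.0],    # unary evidence phi_i(s_i), row per node
                [1.0, 1.0],
                [1.0, 1.0]])

fwd = [np.ones(2)]             # forward messages mu_{i -> i+1}
for i in range(2):
    fwd.append(psi.T @ (phi[i] * fwd[-1]))

bwd = [np.ones(2)]             # backward messages mu_{i+1 -> i}
for i in (2, 1):
    bwd.append(psi @ (phi[i] * bwd[-1]))
bwd = bwd[::-1]

for i in range(3):             # exact marginals from purely local messages
    p = phi[i] * fwd[i] * bwd[i]
    print(i, p / p.sum())
```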
“…However, our primary focus here is on characterising priors for policy selection, and we will gloss over the details of these inference schemes and assume exact Bayesian inference is tractable. While there may be subtle differences resulting from the application of different message-passing schemes [11, 24, 28], this will not influence the computational anatomy at the level of description adopted in this paper, which rests upon conditional dependencies or Markov blankets in a generative model. Markov blankets are statistical constructs that partition sets of random variables.…”
Section: Graphical Models and Inference (mentioning)
confidence: 99%
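The Markov blanket mentioned above has a simple constructive definition in a directed graphical model: a node's blanket is its parents, its children, and its children's other parents. A hedged sketch with a toy edge list (the graph is an illustrative assumption):

```python
# Sketch of a Markov blanket in a directed graphical model: the blanket
# of a node is its parents, its children, and its children's other
# parents. The toy edge list is an illustrative assumption.
edges = [("u", "x"), ("x", "y"), ("w", "y"), ("y", "z")]  # parent -> child

def markov_blanket(node):
    parents   = {p for p, c in edges if c == node}
    children  = {c for p, c in edges if p == node}
    coparents = {p for p, c in edges if c in children and p != node}
    return parents | children | coparents

print(markov_blanket("x"))  # {'u', 'y', 'w'}: conditioned on its blanket,
                            # x is independent of the rest of the network
```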