2013
DOI: 10.1016/j.jmp.2013.05.002

Constraining bridges between levels of analysis: A computational justification for locally Bayesian learning

Abstract: Different levels of analysis provide different insights into behavior: computational-level analyses determine the problem an organism must solve and algorithmic-level analyses determine the mechanisms that drive behavior. However, many attempts to model behavior are pitched at a single level of analysis. Research into human and animal learning provides a prime example, with some researchers using computational-level models to understand the sensitivity organisms display to environmental statistics but other researchers…

Cited by 12 publications (9 citation statements)
References 58 publications (90 reference statements)
Citation statements published between 2015 and 2024.
“…Kurzban, Duckworth, Kable, & Myers, 2013). Although the rational process models presented above were inspired by sampling algorithms, resource‐rational analysis can also leverage variational inference (Gershman & Wilson, 2010; Sanborn & Silva, 2013) and other approximation algorithms.…”
Section: Relation to Previous Work (citation type: mentioning)
Confidence: 99%
“…The second family, variational algorithms, approximates the posterior with a simpler parameterized form that is easier to optimize. Variational algorithms have figured prominently in neuroscience, where they underpin the free-energy principle (Friston, 2009), and have also been proposed as psychologically plausible process models (Sanborn and Silva, 2013; Dasgupta et al., 2019). These algorithms are often much more efficient than Monte Carlo, which is why they are widely used in machine learning.…”
Section: Generative Models: Explicit and Implicit (citation type: mentioning)
Confidence: 99%
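
To make the variational idea in the statement above concrete, here is a minimal sketch of the general technique, not code from the cited papers: the target distribution, learning rate, and all variable names are illustrative. It fits a Gaussian q(theta) = N(mu, sigma^2) to an unnormalized Gaussian target by stochastic gradient ascent on the evidence lower bound (ELBO), using the reparameterization trick.

import numpy as np

# Illustrative target: unnormalized log-density of N(1.0, 0.5^2).
TRUE_MEAN, TRUE_SD = 1.0, 0.5

def dlog_target(theta):
    # Gradient of the log target density with respect to theta.
    return -(theta - TRUE_MEAN) / TRUE_SD**2

# Variational family: q(theta) = N(mu, sigma^2), with sigma = exp(log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.05, 200
rng = np.random.default_rng(0)

for _ in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_samples)  # reparameterization: theta = mu + sigma*eps
    theta = mu + sigma * eps
    g = dlog_target(theta)
    # Monte Carlo gradients of the ELBO = E_q[log p(theta)] + entropy(q);
    # the entropy of q contributes +1 to the log_sigma gradient.
    mu += lr * g.mean()
    log_sigma += lr * ((g * sigma * eps).mean() + 1.0)

print(round(mu, 2), round(float(np.exp(log_sigma)), 2))  # -> roughly 1.0 and 0.5

For this toy problem the optimum can be written down exactly; the point of the sketch is only that the posterior is approximated by optimizing the parameters of a simple family rather than by drawing correlated samples, which is what makes such schemes attractive as efficient process models.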
“…The issues raised in this paper for models of visual perception also have implications for Bayesian models of cognition, where ideas related to sampling (Vul and Rich, 2010; Sanborn et al., 2010; Lieder et al., 2014; Vul et al., 2014; Sanborn and Chater, 2016; Lieder et al., 2017; Zhu et al., 2020), variational inference (Hohwy et al., 2008; Daw et al., 2008; Sanborn and Silva, 2013), or both (Lange et al., 2021) have been invoked to explain a wide variety of heuristics and biases (reviewed in Sanborn, 2015; Griffiths et al., 2012b). Here, too, it is important to distinguish probabilistic models of the world that are posited to exist in a subject’s mind (as is typical in Bayesian Encoding) from experimenter-defined models of a particular task (as is typical in Bayesian Decoding).…”
Section: Discussion (citation type: mentioning)
Confidence: 97%
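
To see how sample-based approximation can generate the systematic behavioral effects this statement refers to, consider a minimal sketch (the probabilities and sample counts are illustrative, not taken from any cited study): an agent that chooses between two options by drawing k samples from its posterior and taking a majority vote behaves like a probability matcher when k = 1 and approaches deterministic maximizing as k grows.

import numpy as np

rng = np.random.default_rng(1)
p_a_better = 0.7  # hypothetical posterior probability that option A is the better choice
trials = 100_000

for k in (1, 5, 101):                             # odd k avoids ties in the vote
    votes = rng.random((trials, k)) < p_a_better  # each sample favors A with prob 0.7
    chose_a = votes.mean(axis=1) > 0.5            # majority vote across the k samples
    print(k, chose_a.mean())
# k = 1 reproduces probability matching (~0.70); k = 101 is near-deterministic (~1.00)

The bias here is not in any single sample, each of which is drawn from the correct posterior, but in the nonlinear decision rule applied to a small number of them, which is the general route by which sampling accounts derive heuristics and biases from approximately Bayesian machinery.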