2019
DOI: 10.1101/644534
Preprint

A theory of learning to infer

Abstract: Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people under-react to prior probabilities (base rate neglect), other studies find that people under-react to the likelihood of the data (conservatism). We argue that these deviat…
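To make the two deviations concrete: a standard way to quantify them in this literature (not spelled out in the truncated abstract above) is to write the reported posterior log-odds as a weighted sum of the prior log-odds and the log-likelihood ratio; a weight below 1 on the prior term corresponds to base rate neglect, and a weight below 1 on the likelihood term to conservatism. A minimal illustrative sketch in Python, with made-up parameter values:

```python
import numpy as np

def posterior_log_odds(prior_log_odds, llr, beta_prior=1.0, beta_llr=1.0):
    """Subjective posterior log-odds as a weighted combination of the
    prior log-odds and the log-likelihood ratio (LLR).
    beta_prior < 1 models base rate neglect (under-reaction to the prior);
    beta_llr   < 1 models conservatism (under-reaction to the data)."""
    return beta_prior * prior_log_odds + beta_llr * llr

# Example: a rare hypothesis (prior 1%) and evidence with a likelihood ratio of 10.
prior = np.log(0.01 / 0.99)
llr = np.log(10.0)

bayes   = posterior_log_odds(prior, llr)                    # ideal observer
neglect = posterior_log_odds(prior, llr, beta_prior=0.4)    # base rate neglect
conserv = posterior_log_odds(prior, llr, beta_llr=0.4)      # conservatism

to_prob = lambda lo: 1.0 / (1.0 + np.exp(-lo))
print(f"Bayes-optimal posterior: {to_prob(bayes):.3f}")
print(f"Base rate neglect:       {to_prob(neglect):.3f}")
print(f"Conservatism:            {to_prob(conserv):.3f}")
```

With these illustrative weights, base rate neglect inflates the posterior for the rare hypothesis well above the Bayesian answer, while conservatism leaves it too close to the prior.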


Cited by 23 publications (28 citation statements) · References 148 publications
“…More relevant to our work is the approach of Dasgupta et al (2020), who taught neural networks to approximate Bayesian inference, given some information about an inference problem's prior and likelihood. Restricting the size of the network allowed them to account for a large amount of cognitive biases, including base rate neglect and conservatism.…”
Section: Resource Rationality (mentioning; confidence: 99%)
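As a rough illustration of the approach described in the quoted passage (meta-training a capacity-limited function approximator to output posterior probabilities), the sketch below trains a deliberately tiny network on randomly generated two-hypothesis Bernoulli inference problems. The architecture, problem family, and hyperparameters are assumptions made for illustration, not the setup actually used by Dasgupta et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem(batch):
    """Random two-hypothesis Bernoulli problems: a prior P(H1), success
    rates under each hypothesis, and an observed success frequency."""
    p1  = rng.uniform(0.05, 0.95, batch)          # prior P(H1)
    th0 = rng.uniform(0.05, 0.95, batch)          # P(success | H0)
    th1 = rng.uniform(0.05, 0.95, batch)          # P(success | H1)
    n = 10
    h = rng.random(batch) < p1                    # true hypothesis
    k = rng.binomial(n, np.where(h, th1, th0))    # observed successes
    lik1 = th1**k * (1 - th1)**(n - k)
    lik0 = th0**k * (1 - th0)**(n - k)
    post = p1 * lik1 / (p1 * lik1 + (1 - p1) * lik0)   # exact posterior
    X = np.stack([p1, th0, th1, k / n], axis=1)
    return X, post

# A deliberately tiny one-hidden-layer network (the capacity limit).
H = 3
W1 = rng.normal(0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.05
for step in range(20000):
    X, y = make_problem(64)
    h1 = np.tanh(X @ W1 + b1)
    yhat = sigmoid(h1 @ W2 + b2).ravel()
    # Gradient of the cross-entropy loss w.r.t. the pre-sigmoid output.
    d_out = (yhat - y)[:, None] / len(y)
    dW2 = h1.T @ d_out;  db2 = d_out.sum(0)
    d_h1 = d_out @ W2.T * (1 - h1**2)
    dW1 = X.T @ d_h1;    db1 = d_h1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# After meta-training, the small network gives approximate posteriors.
X, y = make_problem(5)
approx = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print(np.round(y, 3), np.round(approx, 3))
```

Because the hidden layer is so small, the learned mapping cannot represent Bayes' rule exactly, and its errors tend to be systematic rather than random, which is the sense in which restricting the network's size can produce bias-like behavior.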
“…This approach shares its core principles with our theory: resource rationality and meta-learning. However, BMLI does not approximate Bayesian inference explicitly as done by Dasgupta et al (2020). Instead, it attempts to infer distributions that are optimal for making future predictions (which may or may not correspond to Bayesian inference).…”
Section: Resource Rationality (mentioning; confidence: 99%)
“…Another family of approximation methods, known as variational Bayes (Blei & Jordan, 2006; Blei, Kucukelbir, & McAuliffe, 2017), optimizes an approximate, simplified model of the probability distribution of interest, rather than working with a sample from that distribution. This approach may also be the starting point for neuroscientific and psychological hypotheses, although we do not consider it further here (Dasgupta, Schulz, Tenenbaum, & Gershman, 2019; Gershman & Beck, 2017; Ma, Beck, Latham, & Pouget, 2006; Sanborn, 2017).…”
(mentioning; confidence: 99%)
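For readers unfamiliar with the variational approach mentioned in this passage: instead of drawing samples from the posterior, one picks a simple parametric family and tunes its parameters to maximize the evidence lower bound (equivalently, to minimize the KL divergence to the target). A minimal dependency-free sketch, fitting a Gaussian to an arbitrary unnormalized mixture target; the target and step sizes are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p_tilde(z):
    """Unnormalized log-density of the target: a two-component Gaussian
    mixture, chosen only to make the target non-Gaussian."""
    return np.log(0.7 * np.exp(-0.5 * (z + 1.0)**2)
                  + 0.3 * np.exp(-0.5 * ((z - 3.0) / 0.5)**2))

def grad_log_p_tilde(z, eps=1e-5):
    # Finite-difference derivative, to keep the sketch dependency-free.
    return (log_p_tilde(z + eps) - log_p_tilde(z - eps)) / (2 * eps)

# Variational family: q(z) = Normal(mu, sigma^2), parameterized by (mu, log_sigma).
mu, log_sigma = 0.0, 0.0
lr, n_samples = 0.02, 64

for step in range(3000):
    eps = rng.standard_normal(n_samples)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps                       # reparameterization trick
    g = grad_log_p_tilde(z)
    # ELBO = E_q[log p_tilde(z)] + entropy(q); the entropy term
    # contributes +1 to the gradient w.r.t. log_sigma.
    grad_mu = g.mean()
    grad_log_sigma = (g * sigma * eps).mean() + 1.0
    mu += lr * grad_mu                         # gradient ascent on the ELBO
    log_sigma += lr * grad_log_sigma

print(f"fitted q: mean={mu:.2f}, sd={np.exp(log_sigma):.2f}")
```

The fitted Gaussian is exactly the kind of "approximate, simplified model of the probability distribution of interest" the passage refers to: optimization replaces sampling, at the cost of missing structure (here, the second mode) that the simple family cannot express.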
“…Thus, consumers make approximations about the most essential features of the environment and then compare the data to their mental model according to the Bayesian principle. At least in part, the Bayesian approach solves the curse of dimensionality (Dasgupta et al., 2020) if it is assumed that inductive inference addresses and learns the most important aspects of the individuals’ environment. In this sense, cultural habits, social norms, and attitudes play an important role in consumer decision-making.…”
Section: Consumers Control Contexts By Expectations (mentioning; confidence: 99%)