2017
DOI: 10.31234/osf.io/rzj2m
Preprint

Towards a unifying theory of generalization

Abstract: How do humans generalize from observed to unobserved data? How does generalization support inference, prediction, and decision making? I propose that a large part of human generalization can be explained by a powerful mechanism of function learning. I put forward and assess Gaussian Process regression as a model of human function learning that can unify several psychological theories of generalization. Across 14 experiments and using extensive computational modeling, I show that this model generates testable pre…
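Because the abstract's central proposal is Gaussian Process regression as a model of function learning, a minimal sketch may help readers unfamiliar with the technique. The following Python snippet is illustrative only, not code from the preprint; the RBF kernel, unit signal variance, and noise level are assumptions.

import numpy as np

def gp_posterior(x_obs, y_obs, x_query, length_scale=1.0, noise_var=0.1):
    """GP regression with an RBF kernel: observed (x, y) pairs are
    generalized to unobserved query points via kernel similarity."""
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

    K_inv = np.linalg.inv(rbf(x_obs, x_obs) + noise_var * np.eye(len(x_obs)))
    K_star = rbf(x_query, x_obs)
    mu = K_star @ K_inv @ y_obs                             # posterior mean
    var = 1.0 - np.sum((K_star @ K_inv) * K_star, axis=1)   # posterior variance
    return mu, var

# Example: three observations generalize smoothly to nearby, unobserved inputs.
mu, var = gp_posterior(np.array([0.0, 1.0, 2.0]),
                       np.array([0.1, 0.9, 0.2]),
                       np.linspace(0.0, 3.0, 7))

The posterior variance is what makes the model useful for decision making: it quantifies uncertainty at unobserved points, which an exploration strategy can then exploit.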

Cited by 2 publications (2 citation statements)
References: 131 publications (183 reference statements)

Citation statements (ordered by relevance):
“…Six of these are Bayesian models that vary along three dimensions: first, whether or not they can generalize learned values from one option to another; second, whether or not they use an uncertainty-guided exploration strategy; and third, whether or not they can compose the learned values from the first two sub-tasks to reason about the final sub-task. The Bayesian models include a Bayesian mean-tracker (BMT) 36, which does not learn about the underlying functional structure but instead updates its beliefs about rewards for each option independently, as well as a model that learns functions by generalizing across options within a sub-task based on the idea of Gaussian Process regression (GPR) 18,37,38. For each of these two models, we considered one variant that cannot compose and instead learns separate reward functions for each sub-task, another that does not perform uncertainty-guided exploration, and lastly, one that initializes its predictions in the final sub-task to the composition of the learned means from the first two sub-tasks.…”
Section: Model-based Analysis
confidence: 99%
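To make the three model dimensions in this statement concrete, here is a minimal, hypothetical Python sketch (not the cited authors' code; noise_var and beta are assumed parameters) of the BMT's option-independent, Kalman-style update together with an uncertainty-guided choice rule of the UCB type.

import numpy as np

def bmt_update(mu, var, option, reward, noise_var=0.1):
    """Bayesian mean-tracker: updates beliefs about ONE option's reward;
    all other options are untouched, so nothing generalizes across options."""
    mu, var = mu.copy(), var.copy()
    gain = var[option] / (var[option] + noise_var)   # Kalman gain
    mu[option] += gain * (reward - mu[option])
    var[option] *= 1.0 - gain
    return mu, var

def ucb_choice(mu, var, beta=0.5):
    """Uncertainty-guided exploration: pick the option whose expected
    reward plus uncertainty bonus is largest."""
    return int(np.argmax(mu + beta * np.sqrt(var)))

A GPR learner would replace the per-option update with a kernel-based posterior over all options at once (see the sketch after the abstract above), and a compositional variant would initialize the final sub-task's mean to the composition of the means learned in the first two sub-tasks.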
“…Empirical studies have demonstrated that people have an inherent predisposition towards compositional patterns 7,18-23. For example, using the function learning paradigm, which involves the learning, completion, and prediction of functional patterns, Schulz and colleagues 18,19 have demonstrated that humans find it easier to learn compositional than non-compositional patterns. Furthermore, they showed that humans are better at completing and predicting compositional functions, as well as at remembering them 20,21.…”
Section: Introduction
confidence: 99%
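The compositional patterns discussed here can be illustrated with kernel composition in the same GP framework; the linear-plus-periodic composition below is an assumed example of the kind of structure those studies describe, not their exact stimuli.

import numpy as np

def linear_kernel(a, b):
    return np.outer(a, b)

def periodic_kernel(a, b, period=1.0, length=1.0):
    d = np.abs(a[:, None] - b[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

def compositional_kernel(a, b):
    # Composition by addition: a linear trend plus a repeating pattern.
    return linear_kernel(a, b) + periodic_kernel(a, b)

# Draw one compositional function from the GP prior (jitter for stability).
x = np.linspace(0.0, 4.0, 50)
K = compositional_kernel(x, x) + 1e-6 * np.eye(len(x))
sample = np.random.multivariate_normal(np.zeros(len(x)), K)

Summing or multiplying simple kernels in this way yields structured functions (trends, repetitions, and their combinations), which is the sense in which the cited work speaks of learning and composing functional patterns.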