2019
DOI: 10.1101/860478
Preprint

Transferring structural knowledge across cognitive maps in humans and models

Abstract: Relations between task elements often follow hidden underlying structural forms such as periodicities or hierarchies, whose inference fosters performance. However, transferring structural knowledge to novel environments requires flexible representations that are generalizable over particularities of the current environment, such as its stimuli and size. We suggest that humans represent structural forms as abstract basis sets and that in novel tasks, the structural form is inferred and the relevant basis set i…


Cited by 6 publications (8 citation statements)
References 39 publications
“…A truly all-encompassing model of generalization should capture transfer across domains and structural changes. Even though several recent studies have advanced our understanding of how people transfer knowledge across graph structures [ 80 ], state similarities in multi-task reinforcement learning [ 95 ], and target hypotheses supporting generalization [ 90 ], whether or not all of these recruit the same computational principles and neural machinery remains to be seen.…”
Section: Discussion
confidence: 99%
“…Previous work has also investigated transfer across domains [80], where inferences about the transition structure in one task can be generalized to other tasks. Whereas we used identical transition structures in both tasks, we nevertheless found asymmetric transfer between domains.…”
Section: Related Work
confidence: 99%
“…This suggests that our hypertransform procedure may rely on a common coding strategy that transforms grid cell activity from one (part of an) environment to another in a predictable manner (e.g. by aligning to a geometric axis in the environment), a possibility supported by recent work 32,33 . Similarly, a related possible cause of a shared representational geometry is input from the head direction (HD) system, which is expected to be "similarly different" across animals for the initial parts of the left and right trials in our data.…”
Section: Discussion
confidence: 93%
“…In contrast, here we observed that exposure to a single task allowed our participants to improve their knowledge of the task space. Second, while previous studies either focused on learning of multiple simple tasks (Kattner et al, 2017; Schulz et al, 2020), or learning of a single complex graph-like structure (Cleeremans & McClelland, 1991; Garvert et al, 2017; Schapiro et al, 2013), we focused on how humans learn multiple complex structures, an issue which had not been looked at until very recently (Mark et al, 2020; Wu et al, 2019). Third, studies typically focus on the consequences of what is being transferred, for instance, whether there is an immediate benefit to performance or a change in the rate of learning (Braun et al, 2010; Kattner et al, 2017); our paradigm gives us additional insights into the content that is being transferred, specifically, the pool of candidate models that are used for explaining the task increasingly matches the true pool of possible models within the paradigm.…”
Section: Discussion
confidence: 99%