2020
DOI: 10.1101/2020.11.11.378141
Preprint

A model of egocentric to allocentric understanding in mammalian brains

Abstract: In the mammalian brain, allocentric representations support efficient self-location and flexible navigation. A number of distinct populations of these spatial responses have been identified, but no unified function has been shown to account for their emergence. Here we developed a network, trained with a simple predictive objective, that was capable of mapping egocentric information into an allocentric spatial reference frame. The prediction of visual inputs was sufficient to drive the appearance of spatial rep…
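A minimal sketch of the kind of predictive objective the abstract describes: a recurrent network receives a sequence of egocentric visual observations and is trained only to predict the next observation. All module names, sizes, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PredictiveRNN(nn.Module):
    def __init__(self, obs_dim=128, hidden_dim=256):
        super().__init__()
        # Recurrent core over egocentric observations.
        self.rnn = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        # Readout mapping each hidden state to a predicted next frame.
        self.readout = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) egocentric visual features
        hidden, _ = self.rnn(obs_seq)
        return self.readout(hidden)

model = PredictiveRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

obs_seq = torch.randn(8, 100, 128)     # stand-in for a visual observation sequence
pred = model(obs_seq[:, :-1])          # predict frame t+1 from frames up to t
loss = nn.functional.mse_loss(pred, obs_seq[:, 1:])
loss.backward()
optimizer.step()
# On the paper's account, allocentric spatial tuning would emerge in the
# LSTM's hidden units as a by-product of minimising this predictive loss.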

Cited by 35 publications (40 citation statements) | References 73 publications

“…Notably, these models focused on the description of individual fields but were agnostic of population-level interactions which featured in models that emphasised CA3 recurrence and mutual inhibition (Káli and Dayan, 2000; Samsonovich and McNaughton, 1997). Thus, our current results indicate that a synthesis of both approaches is important for understanding how the hippocampus represents large-scale spaces, yoking the evolution of statistically stable population-level activity to movement through visual states (Hedrick and Zhang, 2016; Uria et al., 2020).…”
Section: Discussion
confidence: 85%
“…In contrast, models based on attractor dynamics tend to ignore any systematic variance in place field size and density across environments, emphasising even coverage and carefully balanced activity (Káli and Dayan, 2000; Samsonovich and McNaughton, 1997). These two classes of model, as well as others (de Cothi and Stachenfeld et al., 2017; Uria et al., 2020), provide competing but not incompatible explanations of hippocampal dynamics, but the evidence needed to generate a synthesis is lacking.…”
Section: Main Text, Introduction
confidence: 99%
“…Some units in these deep networks develop response properties broadly similar to those observed in real brains. However, insights into how such artificial neural networks generate the representations observed in their units (something that could, in principle, guide mechanistic hypotheses for the function of natural neural networks) have been slower to come (but see Cueva et al., 2019; Uria et al., 2020 for progress in uncovering the architectural basis of navigational responses in these networks). In this era of deep learning, a broader question concerns the level of understanding that is appropriate or even possible for the function of large and complex neural networks (Gao and Ganguli, 2015; Hasson et al., 2020; Lillicrap and Kording, 2019; Richards et al., 2019; Saxe et al., 2020; Yamins and DiCarlo, 2016).…”
Section: The CX as a Tractable Deep Recurrent Neural Network
confidence: 99%
“…In recent work [65], a recurrent neural network was trained to predict sequences of visual inputs from the latent space of variational autoencoders.…”
Section: Discussion
confidence: 99%
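A rough, hypothetical sketch of the setup quoted above: a stand-in VAE encoder compresses each visual frame to a latent vector, and a recurrent network is trained to predict the sequence of those latents. Every class name, dimension, and the PyTorch framing are assumptions for illustration, not the cited implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in VAE encoder: frame -> (mu, logvar) of the latent code."""
    def __init__(self, frame_dim=784, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(frame_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

encoder = Encoder()
rnn = nn.GRU(input_size=32, hidden_size=128, batch_first=True)
to_latent = nn.Linear(128, 32)

frames = torch.randn(4, 50, 784)      # stand-in sequence of visual frames
mu, _ = encoder(frames)               # latent codes, shape (4, 50, 32)
states, _ = rnn(mu[:, :-1])           # roll the RNN over the latent sequence
pred_next = to_latent(states)         # predicted latent at the next step
loss = nn.functional.mse_loss(pred_next, mu[:, 1:].detach())
loss.backward()

Detaching the target latents mirrors the common design choice of training the predictive network against a fixed encoder rather than backpropagating the prediction error into the encoder itself.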