Cognitive maps enable us to learn the layout of environments, encode and retrieve episodic memories, and navigate vicariously for mental evaluation of options. A unifying model of cognitive maps will need to explain how the maps can be learned scalably from sensory observations that are non-unique over multiple spatial locations (aliased), retrieved efficiently in the face of uncertainty, and how they form the fabric of efficient hierarchical planning. We propose learning higher-order graphs, structured in a specific way that allows efficient learning, hierarchy formation, and inference, as the general principle that connects these different desiderata. We show that these graphs can be learned efficiently from experienced sequences using a cloned Hidden Markov Model (CHMM), and that uncertainty-aware planning can be achieved using message-passing inference. Using diverse experimental settings, we show that CHMMs can explain the emergence of context-specific representations, the formation of transferable structural knowledge, transitive inference, shortcut finding in novel spaces, remapping of place cells, and hierarchical planning. Structured higher-order graph learning and probabilistic inference might provide a simple unifying framework for understanding hippocampal function, and a pathway for relational abstractions in artificial intelligence.

properties of place cells and grid cells [8]. Yet another recent model casts spatial and non-spatial problems as a connected graph, with neural responses as efficient representations of this graph [9].
Unfortunately, both of these models fail to explain several experimental observations, such as the discovery of place cells that encode routes [10,11], remapping in place cells [12], the recent discovery of place cells that do not encode goal value [13], and flexible planning after learning the environment.

Here, we propose that learning higher-order graphs of sequential events might be an underlying principle of cognitive maps, and we propose a specific representational structure that aids in learning, memory integration, retrieval of episodes, and navigation. In particular, we demonstrate that this representational structure can be realized as a probabilistic sequence model, the cloned Hidden Markov Model (CHMM). We show that sequence learning in CHMMs can explain a variety of cognitive-map phenomena, such as discovering spatial maps from random walks under aliased and disjoint sensory experiences, transferable structural knowledge, finding shortcuts, and hierarchical planning, as well as physiological findings such as remapping of place cells and route-specific encoding. Notably, all these properties emerge from a simple model that is easy to train, scale, and perform inference on.
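To make the model concrete, the following is a minimal sketch of the defining structural property of a CHMM: emissions are deterministic, with each observation owned by a fixed block of "clone" hidden states, so the forward pass simply restricts the belief to the clones of each observed symbol. The number of observations, the number of clones, and the random transition matrix are illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative sizes: 3 observable symbols, 2 clone states per symbol.
n_obs, n_clones = 3, 2
n_states = n_obs * n_clones  # clone states 2i and 2i+1 both emit observation i

rng = np.random.default_rng(0)
# Random row-stochastic transition matrix over all clone states.
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)
pi = np.full(n_states, 1.0 / n_states)  # uniform initial state distribution

def clone_slice(obs):
    """Indices of the clone states that deterministically emit `obs`."""
    return slice(obs * n_clones, (obs + 1) * n_clones)

def forward_loglik(seq):
    """Log-likelihood of an observation sequence under the CHMM.

    Deterministic emissions mean the usual HMM forward recursion reduces
    to propagating belief between the clone blocks of successive symbols.
    """
    alpha = pi[clone_slice(seq[0])].copy()
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for prev, cur in zip(seq[:-1], seq[1:]):
        # Only the sub-block of T linking the two clone blocks matters.
        alpha = alpha @ T[clone_slice(prev), clone_slice(cur)]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

print(forward_loglik([0, 1, 2, 1, 0]))
```

Because several clones share each emission, training (e.g., with EM) can assign different clones of the same symbol to different temporal contexts, which is what lets the model disambiguate aliased observations.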
Cloned Hidden Markov Model as a model of cognitive maps

CHMMs are based on Dynamic Markov Coding (DMC) [14], an idea for representing higher-order sequences by splitting, or cloning, observed states. For example, a first-order Markov chain representing the sequences A-C-E and B-C-D will also assi...