Summary

When mammals navigate the physical environment, specific neurons such as grid cells, head-direction cells, and place cells activate to represent the navigable surface, the direction the animal is facing, and the specific location it is visiting. Here we test the hypothesis that these codes are also activated when humans navigate abstract, language-based representational spaces. Human participants learnt the meanings of novel words as arbitrary signs referring to specific artificial audiovisual objects varying in size and sound. Next, they were presented with sequences of words and asked to process them semantically while we recorded their brain activity with fMRI. Processing words in sequence could be conceived of as movement through the semantic space, enabling us to systematically search for the different types of neuronal coding schemes known to represent space during navigation. By combining representational similarity and fMRI-adaptation analyses, we found evidence of (i) a grid-like code in the right postero-medial entorhinal cortex, representing the general two-dimensional layout of the novel semantic space; (ii) a head-direction-like code in the parietal cortex and striatum, representing the faced direction of movements between concepts; and (iii) a place-like code in the medial prefrontal, orbitofrontal, and mid-cingulate cortices, representing the Euclidean distance between concepts. We also found evidence that the brain represents one-dimensional distances between word meanings along individual sensory dimensions: implied size was encoded in secondary visual areas, and implied sound in Heschl's gyrus/insula. These results reveal that mentally navigating between 2D word meanings is supported by a network of brain regions hosting a variety of spatial codes, partially overlapping with those recruited for navigation in physical space.
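
The grid-like analysis mentioned above can be illustrated with a minimal sketch. This is not the authors' actual pipeline; it only assumes, as is standard in fMRI studies of grid-like coding, that each word meaning is a point in a 2D (size x sound) space and that grid-like activity is modulated with six-fold symmetry by the direction of "movement" between consecutive concepts. All function names and coordinates below are illustrative.

```python
import numpy as np

def movement_angles(points):
    """Angle of each movement between consecutive 2D concept positions."""
    d = np.diff(points, axis=0)
    return np.arctan2(d[:, 1], d[:, 0])

def hexadirectional_regressors(angles, grid_phase=0.0):
    """Six-fold symmetric regressors of the kind used to detect grid-like
    codes in fMRI: cos/sin of 6 * (theta - grid_phase). Movements aligned
    with the grid's preferred axes yield high cosine values."""
    return np.cos(6 * (angles - grid_phase)), np.sin(6 * (angles - grid_phase))

# Example: a short sequence of word meanings at hypothetical 2D coordinates
pts = np.array([[0.0, 0.0],   # small, quiet object
                [1.0, 0.0],   # larger, same sound
                [1.0, 1.0],   # same size, higher sound
                [0.0, 2.0]])  # smaller, higher sound
theta = movement_angles(pts)                  # one angle per movement
cos6, sin6 = hexadirectional_regressors(theta)  # candidate fMRI regressors
```

In a full analysis, regressors like `cos6` and `sin6` would enter a general linear model on the BOLD signal, and the grid orientation `grid_phase` would be estimated from independent data before testing for hexadirectional modulation.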