A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: one using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space.

scene-selective visual cortex | occipital place area | affordances | navigation | dorsal stream

It has long been hypothesized that perceptual systems are optimized for the processing of features that afford ecologically important behaviors (1, 2). This perspective has gained renewed support from recent work on action planning, which suggests that the action system continually prepares multiple, parallel plans that are appropriate for the environment (3, 4). If this view is correct, then sensory systems should be routinely engaged in identifying the potential of the environment for action, and they should explicitly and automatically encode these action affordances.
Here we explore this idea for spatial navigation, a behavior that is essential for survival and ubiquitous among mobile organisms. A critical component of spatial navigation is the ability to understand where one can and cannot go in the local environment: for example, knowing that one can exit a room through a corridor or a doorway but not through a window or a painting on the wall. We reasoned that if perceptual systems routinely extract the parameters of the environment that delimit potential actions, then these navigational affordances should be automatically encoded during scene perception, even when subjects are not engaged in a navigational task.

Previous work has shown that observers can determine the overall navigability of a scene (for example, whether it is possible to move through the scene or not) from a brief glance (5). However, no study has examined the coding of fine-grained navigational affordances, such as whether one can move to the left or to the right within a scene. Furthermore, only recently have investigators begun to characterize the affordance properties of scenes, and this work has focused not on navigational affordances but on more abstract behavioral events that can be used to define scene categories, such as cooking and sleeping (6). It therefore remains unknown whether navigational affordances play a fundamental role in the perception of visual scenes, an...