The ability to stably maintain visual information over brief delays is central to cognitive functioning. One possible way to achieve robust working memory maintenance is to keep multiple concurrent mnemonic representations across different cortical loci. For example, early visual cortex might contribute to storage by representing information in a 'sensory-like' format, while the intraparietal sulcus uses a format transformed away from sensory-driven responses. As an explicit test of mnemonic code transformations along the visual hierarchy, we quantitatively modeled the progression from veridical to categorical orientation representations in human participants. Participants directly viewed, or held in mind, an oriented grating pattern, and we computed the similarity between fMRI activation patterns evoked by different orientations throughout retinotopic cortex. During direct perception, similarity clustered around the cardinal orientations (horizontal and vertical), whereas during working memory the oblique orientations were represented more similarly to one another. We modeled these similarity patterns based on the known distribution of orientation information in the natural world: the 'veridical' model uses an efficient coding framework to capture hypothesized representations during visual perception, whereas the 'categorical' model assumes that differences in 'psychological distance' between orientations lead to categorization relative to the cardinal axes. During direct perception, the veridical model explained the data well in early visual areas, while the categorical model fared worse. During working memory, the veridical model explained only part of the data, while the categorical model gradually gained explanatory power in increasingly anterior retinotopic regions. These findings suggest that directly viewed images are represented veridically, but that once visual information is no longer tethered to the sensory world, mnemonic formats become progressively more categorical along the visual hierarchy.
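To make the two model classes concrete, the sketch below shows one way such model similarity matrices could be constructed. The abstract does not specify the exact functional forms, so the cardinal-peaked natural orientation prior p(θ) ∝ 2 − |sin 2θ|, the exponential similarity kernels, the bandwidths, and the compression parameter are all illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Orientation space: 0-180 degrees, circular with period 180.
thetas = np.arange(0, 180, 15)  # e.g., 12 orientation bins

def circ_dist(a, b, period=180.0):
    """Shortest circular distance between orientations (degrees)."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

# --- Veridical model (efficient coding) --------------------------------
# Assumed natural-scene prior: more probability mass at the cardinals
# (0 and 90 deg) than at the obliques. p(theta) ~ 2 - |sin(2*theta)| is
# an illustrative choice, not the paper's exact form.
grid = np.linspace(0.0, 180.0, 1801)
prior = 2.0 - np.abs(np.sin(2.0 * np.deg2rad(grid)))
prior /= np.trapz(prior, grid)

# Efficient coding maps stimulus space through the prior's CDF, so
# high-prior regions (cardinals) are expanded in internal space and
# hence represented more distinctively.
cdf = np.concatenate(
    ([0.0], np.cumsum(np.diff(grid) * 0.5 * (prior[1:] + prior[:-1])))
)
internal = np.interp(thetas, grid, cdf)  # position of each bin in [0, 1]

def veridical_similarity(internal, bandwidth=0.15):
    """Similarity falls off with distance in the warped internal space."""
    d = np.abs(internal[:, None] - internal[None, :])
    d = np.minimum(d, 1.0 - d)  # internal space inherits circularity
    return np.exp(-d / bandwidth)

# --- Categorical model --------------------------------------------------
# Cardinals act as category boundaries; orientations are compressed
# toward the center of their category (the obliques, 45 and 135 deg),
# so within-category orientations become more similar to one another.
def categorical_warp(theta, compression=0.4):
    center = 45.0 if (theta % 180.0) < 90.0 else 135.0
    return center + compression * (theta - center)

def categorical_similarity(thetas, bandwidth=25.0, compression=0.4):
    warped = np.array([categorical_warp(t, compression) for t in thetas])
    d = circ_dist(warped[:, None], warped[None, :])
    return np.exp(-d / bandwidth)

S_ver = veridical_similarity(internal)
S_cat = categorical_similarity(thetas)

# A measured similarity matrix from one retinotopic ROI could then be
# compared against both models, e.g., by correlating (or regressing)
# their off-diagonal elements.
iu = np.triu_indices(len(thetas), k=1)
print(np.corrcoef(S_ver[iu], S_cat[iu])[0, 1])
```

Under these assumptions, the same fitting step repeated across ROIs from V1 to more anterior retinotopic regions would quantify where the veridical kernel loses, and the categorical kernel gains, explanatory power.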