When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy as encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but these remain descriptive, without quantitative comparison against human perception. Moreover, observers often inspect different parts of a scene sequentially to form an overall percept, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect the decoding hierarchy. We probed the decoding hierarchy by comparing absolute judgments of single orientations with relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when the ordinal judgment was used to retrospectively decode memory representations of the absolute orientations, striking aspects of the absolute judgments, including the correlation and the forward/backward aftereffects between the two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, more invariant, and more categorical, and thus easier to specify and maintain in noisy working memory, and that the more reliable higher-level decoding constrains the less reliable lower-level decoding.

Bayesian prior | interreport correlation | bidirectional tilt aftereffect | efficient coding | adaptation theory

Visual stimuli evoke neuronal responses (a process termed encoding), which lead to our perceptual estimation of the stimuli (decoding). Experimental studies have firmly established that encoding is hierarchical, progressing from lower-level representations of simpler and less invariant features to higher-level representations of more complex and invariant features along the visual pathways (1). Researchers have also studied decoding by using models to relate neuronal responses to perceptual estimation. Most models posit, explicitly or implicitly, that decoding follows the same low- to high-level hierarchy, often in the form of what we call the absolute-to-relative assumption (2-6). For example, these models may decode V1 responses to a line into a perceived orientation of, say, 51.2° (or a distribution around it). Psychophysically, this is termed an absolute judgment. To determine the relationship between two lines, the models first decode each orientation separately, and the two resulting absolute orientations are then compared. For instance, the two absolute orientations may be subtracted to obtain the angle between the lines (relative orientation), or the sign of the difference may be used to determine whether the second line is clockwise or counterclockwise from the first (ordinal orientation discrimination). Absolute orientation of a single line is a simpler, less invariant, lower-...
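To make the absolute-to-relative assumption concrete, the minimal sketch below decodes each line independently as a noisy absolute orientation estimate, then derives the relative orientation as the wrapped difference of the two estimates and the ordinal judgment as the sign of that difference. This is an illustration of the assumption only, not the models of refs. 2-6; the function names, noise level, and sign convention are our own assumptions for the example.

```python
import numpy as np

def wrap_deg(diff):
    """Wrap an orientation difference into [-90, 90) deg (orientation is 180-deg periodic)."""
    return (diff + 90.0) % 180.0 - 90.0

def absolute_to_relative(theta1, theta2, noise_sd=3.0, seed=None):
    """Toy absolute-to-relative decoder (illustrative only).

    Step 1: decode each line independently as a noisy absolute orientation.
    Step 2: subtract the two absolute estimates to obtain the relative orientation.
    Step 3: take the sign of that difference as the ordinal judgment.
    """
    rng = np.random.default_rng(seed)
    est1 = theta1 + rng.normal(0.0, noise_sd)   # absolute judgment of the first line (deg)
    est2 = theta2 + rng.normal(0.0, noise_sd)   # absolute judgment of the second line (deg)
    relative = wrap_deg(est2 - est1)            # relative orientation: angle between the lines (deg)
    # Sign convention is arbitrary in this sketch: positive difference -> "counterclockwise".
    ordinal = "counterclockwise" if relative > 0 else "clockwise"
    return est1, est2, relative, ordinal

# Example: the second line is 2 deg counterclockwise of the first.
print(absolute_to_relative(51.2, 53.2, seed=0))
```

Under this scheme, any error in the two independent absolute estimates propagates directly into the relative and ordinal judgments, which is the property the experiments described here put to the test.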