According to recent theories, perception relies on summary representations that encode statistical information about the sensory environment. Here, we used perceptual priming to characterize the representations that mediate categorization of a complex visual array. Observers judged the average shape or color of a target visual array that was preceded by an irrelevant prime array. Manipulating the variability of task-relevant and task-irrelevant feature information in the prime and target orthogonally, we found that observers were faster to respond when the variability of feature information in the prime and target arrays matched. Critically, this effect occurred irrespective of whether the element-by-element features in the prime and target arrays overlapped, and it was present even when prime and target features were drawn from opposing categories. This "priming by variance" phenomenon occurred with prime-target intervals as short as 100 ms. Further experiments showed that the effect did not depend on resource allocation and occurred even when prime and target did not share the same spatial location. These results suggest that human observers adapt to the variability of visual information, and they provide evidence for a low-level mechanism by which the range or dispersion of visual information is rapidly extracted. This information may in turn help to set the gain of neuronal processing during perceptual choice.

decision making | cognitive control

What information do sensory systems represent, and how do their computations allow us to make judgments about the external world? Canonical theories in perception and cognition suggest that visual neurons code exhaustively for the features or objects that populate natural scenes, from primitive colors and shapes to complex high-dimensional items, such as faces (1, 2). However, any theory of visual representation must account for two striking findings.
First, visual judgments can be remarkably blind to local detail: for example, observers often fail to notice the removal of an object from a cluttered natural image, at least when it lies outside the focus of attention (3, 4). Second, both humans and monkeys are extremely good at extracting high-level information (e.g., the presence of an animal or a navigable path) from a scene in a single, rapid glance, despite the almost endless variability in natural images (5-10). One alternative theory that can account for both of these findings argues that the visual system rapidly computes "summary" statistical information about a scene (e.g., the average size of all of the round objects) rather than specific features (e.g., the presence of a large round object) (11-13). Encoding summary statistics might offer a crude but efficient representation of the visual world (14), one that facilitates the rapid, accurate decisions that are critical for survival (e.g., whether to flee in the face of impending predation, and which route to take), but at the cost of discarding visual detail outside the focus of attention. ...