Our environment contains a large amount of visual information, such as different objects, buildings, and faces, making it impossible to process this complex information at once. To compensate for the limitations of the visual system, attention is allocated to the most relevant visual information. Attention can be guided by bottom-up and top-down processes. That is, a specific visual feature can attract attention (e.g., a fast-moving object or a bright color), resulting in an eye movement to the source of information (bottom-up process). In addition, knowledge about our environment can guide attention (top-down process).

One of the top-down mechanisms that guide attention is a result of implicit contextual learning, as was initially shown by Chun and Jiang (1998). They suggested that visual information from our environment can be learned implicitly and can subsequently guide attention to a specific target location. That is, the association between a target and its surrounding visual context (such as spatial information) can be memorized, improving performance on a visual search task. The contextual-cuing paradigm is typically used to study implicit contextual learning of spatial information. It involves a visual search task, in which a rotated target stimulus (T) is presented among a number of rotated distractors (Ls). The participants have to locate the target as quickly as possible and indicate the direction of rotation. Half of the spatial configurations (i.e., positions of the stimuli) are repeated during the experiment. Interestingly, response times are shorter when the configurations are repeated than when they are new, indicating that contextual information was memorized (Chun, 2000; Chun & Jiang, 1998; Peterson & Kramer, 2001). Participants are not aware of the repetitions and perform at chance level on a recognition memory task, indicating that it is an implicit memory process.
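The design of the contextual-cuing paradigm can be summarized in a short simulation. The sketch below (a minimal illustration, not an actual experiment script; the function names, grid size, and set sizes are assumptions) generates search displays in which half of the spatial configurations repeat across blocks ("old") while the other half are regenerated on every block ("new"):

```python
import random

GRID = [(x, y) for x in range(8) for y in range(6)]  # hypothetical search-display grid

def make_configuration(n_distractors=11, rng=random):
    """Sample a target location plus distractor locations for one display."""
    cells = rng.sample(GRID, n_distractors + 1)
    return {"target": cells[0], "distractors": frozenset(cells[1:])}

def build_blocks(n_repeated=8, n_novel=8, n_blocks=5, rng=random):
    """Half the configurations repeat across blocks ('old'); the rest are
    freshly generated every block ('new'), following the logic of the
    paradigm described above."""
    repeated = [make_configuration(rng=rng) for _ in range(n_repeated)]
    blocks = []
    for _ in range(n_blocks):
        trials = [("old", c) for c in repeated]
        trials += [("new", make_configuration(rng=rng)) for _ in range(n_novel)]
        rng.shuffle(trials)  # randomize trial order within each block
        blocks.append(trials)
    return blocks
```

The contextual-cuing effect would then show up as faster response times on the "old" trials than on the "new" trials as blocks progress.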
This effect is found after a few repetitions (Chun & Jiang, 1998) and remains for weeks after testing (Chun & Jiang, 2003; Jiang, Song, & Rigas, 2005), indicating that it is a robust mechanism.

Recently, Brady and Chun (2007) proposed a model of implicit contextual learning, based on the idea that contextual learning results from the pairwise statistical association between the distractor locations and the target. They used this model to predict the outcome of different contextual-cuing tasks. An important and interesting aspect of the model is that it includes a spatial constraint, assuming that learning is restricted to the local area around the target. Thus, a very limited amount of contextual information is learned, which is spatially close to the target. They stated that "observers may be encoding just one snapshot of the local context surrounding the target when it is detected" (p. 813). Brady and Chun tested this model by comparing modeling results with behavioral results under different task conditions and found that the model was accurate in predicting the behavioral results of various experimental studies. The idea...
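The core idea of such a pairwise-association model with a local spatial constraint can be sketched in a few lines. This is a toy illustration only, not Brady and Chun's (2007) actual implementation; the neighborhood radius, learning rate, and function names are assumptions:

```python
from collections import defaultdict

def learn(display, weights, radius=2, lr=0.1):
    """Strengthen pairwise associations from distractor locations to the
    target location, but only for distractors inside a local neighborhood
    around the target (the spatial constraint)."""
    tx, ty = display["target"]
    for (dx, dy) in display["distractors"]:
        if abs(dx - tx) <= radius and abs(dy - ty) <= radius:  # local context only
            weights[((dx, dy), (tx, ty))] += lr
    return weights

def predicted_facilitation(display, weights, radius=2):
    """Summed association strength from the current local context to the
    target location; higher values stand in for faster search."""
    tx, ty = display["target"]
    return sum(weights.get(((dx, dy), (tx, ty)), 0.0)
               for (dx, dy) in display["distractors"]
               if abs(dx - tx) <= radius and abs(dy - ty) <= radius)
```

On this sketch, a repeated display accumulates local distractor-target associations over blocks and so yields growing facilitation, while a novel display yields none, mirroring the old/new response-time difference.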