Many experiments have shown that the human visual system makes extensive use of contextual information to facilitate object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach to attentional guidance by global scene context. The model comprises two parallel pathways: one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.

Keywords: eye movements, visual search, context, global feature, Bayesian model

According to feature-integration theory (Treisman & Gelade, 1980), the search for objects requires slow serial scanning because attention is necessary to integrate low-level features into single objects. Current computational models of visual attention based on saliency maps have been inspired by this approach, as it allows a simple and direct implementation of bottom-up attentional mechanisms that are not task specific. Computational models of image saliency (Itti, Koch, & Niebur, 1998; Koch & Ullman, 1985; Parkhurst, Law, & Niebur, 2002; Rosenholtz, 1999) provide some predictions about which regions are likely to attract observers' attention. These models work best in situations in which the image itself provides little semantic information and in which no specific task is driving the observer's exploration. In real-world images, the semantic content of the scene, the co-occurrence of objects, and task constraints have been shown to play a key role in modulating where attention and eye movements go.
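To make the two-pathway idea concrete, the sketch below illustrates, in schematic form, how a bottom-up saliency map might be combined multiplicatively with a scene-context prior so that predicted fixation regions are both locally conspicuous and contextually plausible. This is a minimal illustration, not the authors' implementation: it assumes a single difference-of-Gaussians contrast channel as a stand-in for a full saliency model, and a hand-specified Gaussian prior over image rows as a stand-in for the learned global-context distribution; all function names and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def local_saliency(image, center_sigma=1.0, surround_sigma=8.0):
    """Crude bottom-up saliency: center-surround contrast of luminance.

    A single difference-of-Gaussians channel standing in for richer
    multiscale saliency models (cf. Itti, Koch, & Niebur, 1998).
    """
    center = gaussian_filter(image, center_sigma)
    surround = gaussian_filter(image, surround_sigma)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-8)


def context_prior(shape, mean_row_frac=0.6, sigma_frac=0.15):
    """Hypothetical scene-context prior over image rows.

    Stands in for p(target location | global scene features); e.g., a
    street-scene search task might concentrate probability mass on a
    horizontal band. The Gaussian-over-rows form is purely illustrative.
    """
    rows, cols = shape
    y = np.arange(rows, dtype=float)[:, None]
    mu = mean_row_frac * rows
    sigma = sigma_frac * rows
    prior = np.exp(-0.5 * ((y - mu) / sigma) ** 2)
    prior = np.tile(prior, (1, cols))
    return prior / prior.sum()


def contextually_guided_map(image):
    """Combine the two parallel pathways in a Bayesian-flavored way:
    predicted relevance is proportional to local evidence (saliency)
    times the global context prior."""
    combined = local_saliency(image) * context_prior(image.shape)
    return combined / (combined.sum() + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((240, 320))  # stand-in for a grayscale natural scene
    guidance = contextually_guided_map(img)
    row, col = np.unravel_index(np.argmax(guidance), guidance.shape)
    print("Most likely fixation region (row, col):", row, col)
```

The multiplicative combination reflects the intuition behind the Bayesian framing: a region attracts attention only if it is both salient given local image statistics and likely to contain the target given the global scene context.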