When looking at a scene, we frequently move our eyes to place consecutive interesting regions on the fovea, the centre of the retina. At each fixation, only this specific foveal region is analysed in detail by the visual system. Visual attention mechanisms control eye movements and depend on two types of factors: bottom-up and top-down. Bottom-up factors include different visual features such as colour, luminance, edges, and orientations. In this paper, we quantitatively evaluate the relative contribution of basic low-level features as candidate guiding factors to visual attention and hence to eye movements. We also study how these visual features can be combined in a bottom-up saliency model. Our work consists of three interacting parts: a functional saliency model, a statistical model, and eye movement data recorded during free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements. We show an essential role of high-frequency luminance and an important contribution of the central fixation bias. The relative contributions of the features, estimated by the statistical model, are then used to combine the different feature maps into a saliency map. Finally, the comparison between the saliency model and the experimental data confirms the influence of these contributions.
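To make the combination step concrete, the sketch below shows one way feature maps could be linearly combined into a saliency map using relative contributions as weights, with a Gaussian map standing in for the central fixation bias. This is a minimal illustration, not the authors' implementation: the feature names, weight values, and the multiplicative use of the centre bias are assumptions.

```python
import numpy as np

def combine_feature_maps(feature_maps, weights):
    """Linearly combine normalised feature maps into a single saliency map.

    feature_maps: dict of feature name -> 2-D array (all the same shape)
    weights: dict of feature name -> relative contribution (e.g. estimated
             by a statistical model fitted to eye-movement data)
    """
    names = list(feature_maps)
    saliency = np.zeros(feature_maps[names[0]].shape)
    for name in names:
        fmap = feature_maps[name].astype(float)
        rng = fmap.max() - fmap.min()
        if rng > 0:  # normalise each map to [0, 1] before weighting
            fmap = (fmap - fmap.min()) / rng
        saliency += weights.get(name, 0.0) * fmap
    return saliency

def centre_bias(shape, sigma_frac=0.25):
    """Gaussian central-fixation bias, peaking at the image centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-((ys - cy) ** 2 / (2 * sy ** 2) +
                    (xs - cx) ** 2 / (2 * sx ** 2)))

# Hypothetical example with random feature maps; high-frequency luminance
# is given the largest weight, in line with the contribution reported above.
h, w = 60, 80
maps = {
    "luminance_high_freq": np.random.rand(h, w),
    "luminance_low_freq": np.random.rand(h, w),
    "colour": np.random.rand(h, w),
    "orientation": np.random.rand(h, w),
}
weights = {"luminance_high_freq": 0.5, "luminance_low_freq": 0.2,
           "colour": 0.15, "orientation": 0.15}
saliency_map = combine_feature_maps(maps, weights) * centre_bias((h, w))
```

Multiplying by the centre-bias map is only one possible way to account for the central fixation bias; treating it as an additional weighted map in the linear combination would be an equally plausible reading of the approach.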