Humans have the capacity to move their eyes; thanks to this capacity, they can orient their gaze toward a relevant object within a complex scene. In this paper, we present a driver assistance application that tries to mimic this human capacity. We focus especially on highway driving situations, where obstacles must be detected far ahead of the vehicle. Gaze control and orientation are implemented with an active vision system. Human gaze is related to visual attention, which results from perceptual and cognitive processes. Several studies have shown that human perception, and more specifically visual perception, can be decomposed into a bottom-up process and a top-down process. Most previous research has focused on the bottom-up process. In this work, in order to mimic human behavior, or at least to improve vision systems, we use a new active stereovision setup together with a model of human visual perception based on both approaches. Moreover, in the bottom-up process, we add the depth information provided by the stereoscopic sensor to the classical features used in other works. The top-down process is driven by global knowledge of the scene and its features. Results obtained on a virtual road sequence show the orientation of the field of view of the stereoscopic sensor toward objects that are relevant according to our criteria.
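To make the combination of the two processes concrete, the sketch below shows one plausible way to fuse bottom-up feature maps, including a depth channel from the stereoscopic sensor, with top-down weights into a single saliency map and to select the gaze target from it. This is a minimal illustration under Itti-style assumptions; the feature names, normalization, and weighting scheme are ours for the example and are not the exact model described in this paper.

```python
# Illustrative sketch: fuse bottom-up feature maps (including depth) with
# top-down weights into a saliency map, then pick the gaze target.
import numpy as np

def normalize(m):
    """Scale a feature map to [0, 1] (simple stand-in for a normalization operator)."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def saliency(features, top_down_weights):
    """Weighted sum of normalized bottom-up maps; the weights carry the top-down bias."""
    s = np.zeros_like(next(iter(features.values())), dtype=float)
    for name, fmap in features.items():
        s += top_down_weights.get(name, 1.0) * normalize(fmap.astype(float))
    return s

# Hypothetical example: four bottom-up channels on a 120x160 grid.
h, w = 120, 160
rng = np.random.default_rng(0)
features = {
    "intensity":   rng.random((h, w)),
    "color":       rng.random((h, w)),
    "orientation": rng.random((h, w)),
    "depth":       rng.random((h, w)),   # e.g. from stereo disparity
}
# Top-down bias: on a highway, favor distant obstacles ahead of the vehicle.
weights = {"intensity": 1.0, "color": 1.0, "orientation": 1.0, "depth": 2.0}

s = saliency(features, weights)
gaze_row, gaze_col = np.unravel_index(np.argmax(s), s.shape)
print(f"Orient the stereoscopic sensor toward pixel ({gaze_row}, {gaze_col})")
```

In such a scheme, the active stereovision head would be steered so that its field of view is centered on the most salient location; the depth channel and the top-down weights are what distinguish this fusion from a purely bottom-up saliency model.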