Abstract: Gaze estimation systems rely on calibration procedures that require active subject participation to estimate the point-of-gaze accurately. Consequently, these systems do not support covert monitoring of visual scanning patterns. This paper presents a novel gaze estimation methodology that requires no calibration procedure involving active user participation. The methodology uses multiple infrared light sources for illumination and a stereo pair of video cameras to obtain images of the eyes. Each pair of images is analyzed to estimate the centers of the pupils and the centers of curvature of the corneas. These points, which are estimated without a personal calibration procedure, define the optical axis of each eye. Because the point-of-gaze lies along the visual axis rather than the optical axis, the angle between the two axes is estimated by a procedure that minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look at the display naturally (e.g., watching a video clip). Simulation results demonstrate that for a subject sitting 75 cm in front of an 80 cm × 60 cm display (40-inch TV), the RMS error of the estimated point-of-gaze is 17.8 mm (1.3°).
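The minimization described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the simplified eye model (one horizontal angle alpha and one vertical angle beta per eye relating the optical and visual axes), the function names, the display plane at z = 0, and the crude coordinate-descent optimizer are all choices made here for illustration.

```python
import numpy as np

def rotate(d, alpha, beta):
    # Deviate a unit optical-axis direction d by a horizontal angle alpha
    # (rotation about the y axis) and a vertical angle beta (about x),
    # yielding the hypothesized visual-axis direction.
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    return ry @ rx @ d

def intersect_display(c, d):
    # Intersect the ray c + t*d with the display plane z = 0 (assumption);
    # c is the center of corneal curvature, d the visual-axis direction.
    t = -c[2] / d[2]
    return (c + t * d)[:2]

def mean_sq_disparity(params, samples):
    # Mean squared distance between the left- and right-eye visual-axis
    # intersections with the display, over all gaze samples.
    a_l, b_l, a_r, b_r = params
    total = 0.0
    for c_l, d_l, c_r, d_r in samples:
        p_l = intersect_display(c_l, rotate(d_l, a_l, b_l))
        p_r = intersect_display(c_r, rotate(d_r, a_r, b_r))
        total += float(np.sum((p_l - p_r) ** 2))
    return total / len(samples)

def fit_angles(samples, iters=60, step=np.radians(1.0)):
    # Dependency-free cyclic coordinate descent over the four angles;
    # a stand-in for whatever optimizer the paper actually uses.
    params = np.zeros(4)
    best = mean_sq_disparity(params, samples)
    for _ in range(iters):
        for i in range(4):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                val = mean_sq_disparity(trial, samples)
                if val < best:
                    params, best = trial, val
        step *= 0.9
    return params, best
```

One property worth noting about this simplified objective: because it only penalizes disagreement between the two eyes, angle components common to both eyes (e.g., an identical vertical offset) are weakly observable; the paper's full procedure and eye model are more elaborate than this sketch.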