Most facial recognition systems depend on accurately locating the left and right eye centers in order to geometrically normalize the face images in a database under study. In this paper, we propose a novel pupil detection algorithm that automatically and efficiently locates the eye centers of face images captured with visible and infrared sensors under challenging conditions. Our approach handles long-range and night-time face images captured in the infrared (IR) band under active illumination, and it also deals efficiently with partial face occlusion (subjects wearing eyeglasses) as well as variation in face pose and illumination. Our scenario-adaptable methodology involves a number of algorithmic steps, including (i) situation classification (to automatically determine the acquisition scenario each input face image comes from), (ii) generation and use of 2D normalized correlation coefficients, (iii) eye location prediction, (iv) computation of summation range filters for accurate pupil detection, and (v) an eyeglasses classifier based on support vector machines (to determine whether subjects are wearing glasses). Our approach is compared against state-of-the-art academic and commercial eye detection algorithms, including (a) the Viola and Jones AdaBoost method, (b) one of the latest academic eye detection algorithms, proposed by Valenti and Gevers (IEEE TPAMI), and (c) the eye detection approach available as part of the G8 commercial face recognition software package provided by L1 Systems. Experimental results demonstrate that our approach outperforms all of these approaches when applied to various challenging face datasets, including IR face images captured behind tinted glass or at long ranges of up to about 350 feet, day or night. We also show the benefit of our approach to face normalization and, as a result, to face recognition performance in terms of rank-1 identification rates. This is an important practical achievement for forensic tool operators, who may otherwise have to manually localize the eye centers of all face images in a dataset before further face image preprocessing and face-based matching algorithms can be applied.
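
For readers unfamiliar with the building blocks named in steps (ii) and (iv), the sketch below illustrates generic versions of a 2D normalized correlation coefficient map and a summation range filter. It is a minimal illustration, not the implementation described in this paper: the function names, window sizes, and the reading of the summation range filter as a windowed sum of local max-minus-min responses are our own assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def ncc_map(image, template):
    """2D normalized correlation coefficient of `template` slid over `image`
    (zero-mean normalized cross-correlation); values lie in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                out[y, x] = (p * t).sum() / denom
    return out

def summation_range_score(eye_region, k=5, win=15):
    """Assumed reading of a summation range filter: local intensity range
    (max minus min) in a k x k neighborhood, summed over a win x win window.
    The strong intensity transition at the pupil boundary yields a high
    score near the pupil center."""
    img = eye_region.astype(np.float64)
    local_range = maximum_filter(img, size=k) - minimum_filter(img, size=k)
    # uniform_filter averages over the window; multiplying by the window
    # area turns the average back into a sum.
    return uniform_filter(local_range, size=win) * (win * win)
```

In such a generic pipeline, a candidate eye location would be taken at the peak of the correlation map and the pupil center refined at the peak of the summation range score; the approach described in this paper additionally adapts these steps to the detected acquisition scenario and to whether the subject is wearing glasses.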