In this paper, we explore how a visual system equipped with a pair of frontally placed eyes or cameras can rapidly estimate egomotion and depth for the task of locomotion by exploiting the eye topography. We eschew the traditional approach of motion-stereo integration, as finding stereo correspondence is a computationally expensive operation. Instead, we propose a quasi-parallax scheme that pairs appropriate visual rays, thereby obviating the need for stereo correspondence while still leveraging the redundant information present in the binocular overlap. Our model covers realistic visual systems in which the two eyes may deviate from the strictly frontal-parallel configuration, and our results show that the advantages of the parallax-based approach are retained even then. In particular, the scheme disambiguates translation and rotation better than conventional two-frame structure-from-motion approaches, despite lacking views that cover diametrically opposing directions, as spherical or laterally placed eyes do. The rapid processing that such a scheme entails seems to offer a more realizable and useful alternative for depth recovery during locomotion.
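As a rough illustration of the ray-pairing idea (the sketch below is not the paper's implementation and its quantities are hypothetical), consider the standard perspective optical-flow equations: the rotational flow component depends only on the viewing direction and not on depth, so pairing rays with the same direction in the two eyes and differencing their flows cancels rotation and leaves a depth-dependent, parallax-like quantity, without establishing any stereo correspondence.

```python
# Illustrative numpy sketch of the ray-pairing idea behind quasi-parallax,
# assuming the standard perspective optical-flow model. The eye offsets,
# motion values, and depths are made-up example numbers, not the paper's.
import numpy as np

def rotational_flow(x, y, w):
    """Rotational flow at image direction (x, y); independent of depth."""
    wx, wy, wz = w
    u = x * y * wx - (1.0 + x**2) * wy + y * wz
    v = (1.0 + y**2) * wx - x * y * wy - x * wz
    return np.array([u, v])

def translational_flow(x, y, t, Z):
    """Translational flow at (x, y) for a scene point at depth Z."""
    tx, ty, tz = t
    return np.array([(-tx + x * tz) / Z, (-ty + y * tz) / Z])

# Common head motion: translation t and rotation w about a head-centred origin.
t = np.array([0.1, 0.0, 1.0])
w = np.array([0.01, -0.02, 0.005])

# Eye positions relative to the rotation centre (baseline along x).
r_left, r_right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])

# Each eye's effective translation includes the rotation-induced component.
t_left = t + np.cross(w, r_left)
t_right = t + np.cross(w, r_right)

# Pair the SAME viewing direction (x, y) in both eyes: the two rays generally
# hit different scene points (different depths), so no correspondence is used.
x, y = 0.2, -0.1
Z_left, Z_right = 4.0, 3.6

flow_left = translational_flow(x, y, t_left, Z_left) + rotational_flow(x, y, w)
flow_right = translational_flow(x, y, t_right, Z_right) + rotational_flow(x, y, w)

# The flow difference of a paired ray set is purely translational: the
# depth-independent rotational term is identical in both eyes and cancels.
diff = flow_left - flow_right
pure_translation = (translational_flow(x, y, t_left, Z_left)
                    - translational_flow(x, y, t_right, Z_right))
assert np.allclose(diff, pure_translation)
print("quasi-parallax (rotation-free) flow difference:", diff)
```

In this toy setting, the differenced flows constrain translation and depth free of rotational contamination, while the undifferenced flows still carry the rotational information; how the paper actually selects and combines ray pairs is described in the sections that follow.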