We propose a lightweight deep learning approach for gaze estimation that represents the visual field as three distinct regions: fovea, near periphery, and far periphery. Each region is modelled using a gaze parameterization based on angle-magnitude, latitude, or a combined angle-magnitude-latitude representation. We evaluated how accurately these representations predict a user's gaze across the visual field when trained on data from VR headsets. Our experiments confirmed that the latitude model produces the most accurate gaze predictions, with an average latency compatible with the demanding real-time requirements of an untethered device. We further constructed an ensemble model that outperforms the individual parameterizations at comparable latency.
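
The abstract does not define the parameterizations; as an illustration only, the sketch below shows one plausible reading, assuming gaze is available either as a 2D offset from the view center (for angle-magnitude, i.e. polar angle and eccentricity) or as a 3D unit direction (for latitude, i.e. elevation/azimuth). The function names and axis conventions here are hypothetical, not the paper's.

```python
import numpy as np

def angle_magnitude(gaze_xy):
    """Polar parameterization of a 2D gaze offset from the view center:
    the direction of the offset and its eccentricity (magnitude)."""
    x, y = gaze_xy
    angle = np.arctan2(y, x)    # direction of the gaze offset, in radians
    magnitude = np.hypot(x, y)  # eccentricity from the view center
    return angle, magnitude

def latitude_longitude(gaze_dir):
    """Spherical parameterization of a 3D gaze direction (assumed convention:
    z is the optical axis, y points up): latitude is elevation, longitude is azimuth."""
    x, y, z = np.asarray(gaze_dir, dtype=float) / np.linalg.norm(gaze_dir)
    latitude = np.arcsin(y)        # elevation above the horizontal plane
    longitude = np.arctan2(x, z)   # azimuth around the vertical axis
    return latitude, longitude
```

Under this reading, the combined angle-magnitude-latitude model would simply concatenate both parameterizations as the regression target for each visual-field region.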