Deep Learning-based object detectors enhance the capabilities of remote sensing platforms, such as Unmanned Aerial Vehicles (UAVs), across a wide spectrum of machine vision applications. However, deep learning introduces heavy computational requirements, preventing the deployment of such algorithms in scenarios that impose low-latency constraints during inference in order to make mission-critical decisions in real time. In this paper, we address the challenge of efficiently deploying region-based object detectors on aerial imagery by introducing an informed methodology for extracting candidate detection regions (proposals). Our approach combines information from the UAV's on-board sensors, such as flying altitude, with light-weight computer vision filters and prior domain knowledge to intelligently reduce the number of region proposals, eliminating false positives at an early stage of the computation and thereby significantly reducing the computational workload while sustaining detection accuracy. We apply and evaluate the proposed approach on the task of vehicle detection. Our experiments demonstrate that state-of-the-art detection models achieve up to 2.6x faster inference when employing our altitude-aware, data-driven methodology. In addition, we introduce and release to the community a novel vehicle-annotated and altitude-stamped dataset of real UAV imagery, captured at numerous flying altitudes under a wide range of traffic scenarios.
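To make the altitude-aware filtering idea concrete, the sketch below illustrates one way such a proposal filter could work; it is not the paper's implementation, and the camera focal length, nominal vehicle length, and tolerance are assumed values chosen purely for illustration. Proposals whose pixel size is implausible for a vehicle at the reported flying altitude are discarded before they reach the detector's costly classification stage.

```python
import numpy as np


def expected_vehicle_size_px(altitude_m, focal_length_px, vehicle_length_m=4.5):
    """Approximate apparent vehicle length (pixels) for a nadir-pointing
    camera at the given altitude, using a simple pinhole projection.
    The focal length and vehicle length here are illustrative assumptions."""
    return focal_length_px * vehicle_length_m / altitude_m


def filter_proposals_by_altitude(proposals, altitude_m, focal_length_px,
                                 tolerance=0.5):
    """Drop region proposals whose size is implausible for a vehicle at the
    current flying altitude. `proposals` is an (N, 4) array of boxes in
    (x1, y1, x2, y2) pixel coordinates."""
    proposals = np.asarray(proposals, dtype=float)
    widths = proposals[:, 2] - proposals[:, 0]
    heights = proposals[:, 3] - proposals[:, 1]
    longest_side = np.maximum(widths, heights)

    expected = expected_vehicle_size_px(altitude_m, focal_length_px)
    lower, upper = (1 - tolerance) * expected, (1 + tolerance) * expected

    keep = (longest_side >= lower) & (longest_side <= upper)
    return proposals[keep]


if __name__ == "__main__":
    # Toy usage: at 50 m altitude with an assumed 1500 px focal length,
    # a 4.5 m car spans roughly 135 px, so proposals far outside that
    # range are pruned before the expensive detection head runs.
    rng = np.random.default_rng(0)
    corners = rng.uniform(0, 1000, size=(2000, 2))
    sizes = rng.uniform(5, 400, size=(2000, 2))
    proposals = np.hstack([corners, corners + sizes])
    kept = filter_proposals_by_altitude(proposals, altitude_m=50.0,
                                        focal_length_px=1500.0)
    print(f"kept {len(kept)} of {len(proposals)} proposals")
```

In this sketch, the altitude reading stands in for the on-board sensor input; the paper's full method additionally uses light-weight computer vision filters and prior domain knowledge, which are not modeled here.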