New applications are continuously appearing with drones as protagonists, but all of them share one critical maneuver: landing. New application requirements have driven the study of novel landing strategies, in which vision systems have played and continue to play a key role. These new applications generally rely on the control and navigation systems embedded in the aircraft. However, the internal dynamics of these systems, originally designed for other tasks such as smoothing trajectories between waypoints, can trigger undesired behaviors. In this paper, we propose a landing system based on monocular vision and navigation information to estimate the global position of the helipad. In addition, the global estimation system includes a position error correction module based on a cylinder space transformation and a sliding-window filtering system. Finally, the landing system is evaluated with three quality metrics, showing how the proposed correction system, together with stationary filtering, improves on the raw landing system.