Monocular systems are attractive because of their relatively low cost and ease of calibration. However, they suffer from scale ambiguity due to the loss of one dimension when the three-dimensional world is projected onto a two-dimensional image plane. This paper presents a method for resolving the scale ambiguity and drift observed in monocular camera-based visual odometry by using the slant distance obtained from skyline matching between camera images and images synthesized from a 3D building model. The resulting visual odometry outputs are then combined with the solutions obtained from skyline-based positioning for vehicular applications in Global Navigation Satellite System (GNSS)-denied or harsh environments such as deep urban canyons. Experiments conducted in downtown Calgary demonstrate the benefit of correcting the scale factor, yielding a 90% improvement in the position solution compared with leaving the scale drift uncorrected. These results suggest the potential of the proposed method for critical applications such as autonomous driving and driver-assistance systems in areas where a 3D building model is available.
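The core of the scale correction can be illustrated with a minimal sketch: an absolute slant distance to a skyline feature fixes the arbitrary scale of the monocular translation estimate. The function and variable names below are illustrative assumptions, not the paper's implementation, which additionally handles the skyline matching and the fusion with skyline-based positioning.

import numpy as np

def correct_scale(t_vo, d_vo, d_slant):
    """Rescale an up-to-scale monocular VO translation to metric units.

    t_vo    : 3-vector translation estimated by monocular VO (arbitrary scale)
    d_vo    : distance to a matched skyline feature in the same arbitrary scale
    d_slant : slant distance to that feature, in metres, from skyline matching
              against images synthesized from the 3D building model
    """
    s = d_slant / d_vo            # scale factor tying VO units to metres
    return s * t_vo, s

# Example: VO places the feature 2.5 units away; skyline matching gives 37.5 m.
t_vo = np.array([0.8, 0.0, 0.6])
t_metric, s = correct_scale(t_vo, d_vo=2.5, d_slant=37.5)
print(s, t_metric)                # 15.0, translation now expressed in metres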