The accurate and stable identification of the traffic load distribution on a bridge deck is of great significance to bridge health monitoring and safety early warning. To accomplish this task, we combine a weigh-in-motion system (WIM) with machine vision and develop a traffic load monitoring (TLM) technology for the whole bridge deck. For bridge health monitoring, the TLM should be available for online structural analysis, achieve high accuracy, and adapt to changing lighting conditions. However, existing TLM methods struggle to meet the requirements of real-time operation, accuracy, and lighting robustness simultaneously. In this regard, this paper proposes an improved full-bridge TLM method based on the YOLO-v3 convolutional neural network. The core of this method comprises training a dual-target detection model and correcting vehicle locations. The detection model identifies the profiles of the entire vehicle and of its tail, marking them with compact rectangular boxes. Based on the corner points of these rectangular boxes, an optical geometry model is proposed to measure vehicle dimensions and correct vehicle centroids, so that vehicle locations can be estimated more accurately. By time-synchronizing the cameras with the WIM, each measured load is paired with the vehicle "pixel cluster" detected in the video; the traffic load distribution over the whole bridge deck is thus identified accurately in real time. Verified with field data from a ramp bridge, the proposed method identifies vehicle locations more accurately, adapts more robustly to lighting changes, and computes faster than existing approaches, meeting the requirements of field monitoring of traffic load distribution.
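
As a rough illustration of the time-synchronized pairing step, a WIM load record could be matched to the detected vehicle track closest in time within the same lane. This is only a minimal sketch under assumed data structures; the class names, fields, and the 0.5 s tolerance are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WimRecord:
    timestamp: float     # seconds on the shared (synchronized) clock
    axle_load_kn: float  # measured load from the WIM sensor
    lane: int

@dataclass
class Detection:
    timestamp: float  # time the vehicle "pixel cluster" crossed the WIM line
    track_id: int     # id of the tracked pixel cluster in the video
    lane: int

def pair_load_to_track(record: WimRecord,
                       detections: List[Detection],
                       tolerance_s: float = 0.5) -> Optional[int]:
    """Return the track_id of the same-lane detection nearest in time,
    or None if no detection falls within the time tolerance."""
    candidates = [d for d in detections if d.lane == record.lane]
    if not candidates:
        return None
    best = min(candidates, key=lambda d: abs(d.timestamp - record.timestamp))
    if abs(best.timestamp - record.timestamp) > tolerance_s:
        return None
    return best.track_id
```

Once a load is paired with a track, the track's corrected centroid gives the load's position on the deck at every video frame, which is what allows the full-deck load distribution to be updated in real time.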