The need for research on driver gaze tracking has recently increased with the development of driver convenience systems, such as autonomous driving and intelligent driver monitoring systems, which aim to reduce traffic accidents caused by driver negligence. In such systems, a camera is installed in the vehicle to track the driver's gaze. However, the accuracy of gaze estimation in vehicle environments decreases when motion blur of the driver occurs owing to vehicular vibration during driving, and most previous studies on in-vehicle driver gaze tracking did not consider motion blur in their experiments. To address this problem, we propose a method that improves gaze estimation accuracy by deblurring blurred images of the driver captured in the vehicle. This study is the first attempt to estimate a driver's gaze by deblurring motion-blurred images with a cycle-consistent generative adversarial network (CycleGAN) while simultaneously using image information from two cameras in the vehicle. Whereas previous studies used multiple deep convolutional neural networks (CNNs) to process the driver's eye and face images separately, in this study the information obtained from the two cameras is integrated into a single three-channel image and then deblurred, thereby reducing the time required for training. In addition, whereas previous studies measured the level of blur in the input image and did not calculate the gaze position for severely blurred images, the proposed method calculates the gaze position for all input images. In experiments on the Dongguk blurred gaze database (DBGD), collected from 26 drivers in actual vehicles, and on the open Columbia gaze dataset (CAVE-DB), the proposed method exhibited higher accuracy than existing methods.
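The abstract states that the two camera views are integrated into a single three-channel image before deblurring, so that one network can be trained instead of several. The following is a minimal sketch of that integration step, assuming one grayscale frame per camera; the function name fuse_camera_views, the target resolution, and the particular channel assignment are illustrative assumptions rather than the paper's actual layout.

```python
import numpy as np
import cv2  # OpenCV, assumed available for resizing


def fuse_camera_views(cam1_gray, cam2_gray, size=(224, 224)):
    """Fuse grayscale frames from two in-vehicle cameras into one
    three-channel image, so a single deblurring network (e.g., a
    CycleGAN generator) can process both views at once instead of
    training separate CNNs per view.

    Channel layout here (camera 1 in channel 0, camera 2 in channel 1,
    their average in channel 2) is a hypothetical choice for
    illustration; the paper's actual channel assignment may differ.
    """
    a = cv2.resize(cam1_gray, size).astype(np.float32)
    b = cv2.resize(cam2_gray, size).astype(np.float32)
    fused = np.stack([a, b, (a + b) / 2.0], axis=-1)  # H x W x 3
    return fused / 255.0  # normalize to [0, 1] for the network


if __name__ == "__main__":
    # Synthetic frames standing in for the two camera captures
    cam1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    cam2 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    x = fuse_camera_views(cam1, cam2)
    print(x.shape)  # (224, 224, 3): one input for the deblurring model
```

Because both views travel through the network as channels of one tensor, a single forward and backward pass covers them jointly, which is consistent with the reduced training time the abstract reports.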