Fatigue-related accidents are increasing due to long working hours, medical conditions, and age, all of which slow a driver's response at the moment of a hazard. One visual indicator of drowsiness and fatigue is excessive yawning. In this paper, a non-intrusive approach is presented in which a car dashcam is used to record driving scenarios that imitate real-life situations, such as the driver being distracted or talking to a passenger in the next seat. We build a deep CNN model as a classifier that labels each frame as showing a yawning or non-yawning driver. The driver's fatigue is then classified into three levels, alert, early fatigue, and fatigued, based on the number of yawns: alert means the driver is not yawning, early fatigue means the driver yawns once in a minute, and fatigued means the driver yawns more than once in a minute. An overall decision is made by combining the source score with the driver's fatigue state. The robustness of the proposed method is tested under various illumination conditions and a variety of head motions. Experiments conducted on the YawDD dataset, which contains 322 subjects, show that our model provides a promising framework for accurately detecting the drowsiness level with low complexity.
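
To make the yawn-count-to-fatigue-level mapping concrete, the following minimal Python sketch illustrates the rule stated above (alert: no yawns, early fatigue: one yawn per minute, fatigued: more than one yawn per minute). It assumes per-frame yawning/non-yawning predictions from the CNN; the function names and the simple transition-based yawn counter are illustrative assumptions, not part of the paper.

    # Illustrative sketch only; names and the yawn-counting heuristic are assumptions.

    def classify_fatigue_level(yawns_per_minute: int) -> str:
        """Map the number of yawns detected in a one-minute window to a fatigue level."""
        if yawns_per_minute == 0:
            return "alert"           # no yawning detected
        elif yawns_per_minute == 1:
            return "early fatigue"   # a single yawn within one minute
        else:
            return "fatigued"        # more than one yawn within one minute

    def count_yawns(frame_labels: list[int]) -> int:
        """Count yawns from per-frame CNN predictions (1 = yawning, 0 = not yawning).

        A yawn is approximated here as a transition from a non-yawning frame
        to a yawning frame.
        """
        return sum(1 for prev, cur in zip([0] + frame_labels, frame_labels)
                   if prev == 0 and cur == 1)

    if __name__ == "__main__":
        labels = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]      # toy per-frame predictions
        yawns = count_yawns(labels)
        print(yawns, classify_fatigue_level(yawns))  # -> 2 fatigued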