Drowsy driving causes many road accidents because it impairs the driver's alertness and control of the vehicle; a driver drowsiness detection system is therefore becoming a necessity. Invasive approaches that analyze electroencephalography (EEG) signals through head-mounted electrodes are inconvenient for drivers, while other, non-invasive fatigue detection studies focus on yawning or eye blinks. Analyzing several facial components has yielded promising results, but it is not yet sufficient to predict hypovigilance. In this paper, we propose a non-invasive approach based on a deep learning model to classify vigilance into five states. The first step uses MediaPipe Face Mesh to identify the target facial areas and computes the driver's gaze, eye-state descriptors, and 3D head pose. Detecting the iris region of interest allows us to compute a normalized image that characterizes the state of the eyes relative to the eyelids. A transfer learning step with the MobileNetV3 model is then applied to the normalized images to extract additional descriptors of the driver's eyes. The vectors of previously computed features are the inputs of our LSTM network. This sequential learning allows the system to anticipate hypovigilance by taking previous time steps into account, to classify vigilance into five levels, and to alert the driver before hypovigilance sets in. Our experimental study achieves 98.4%, a satisfactory result compared with the literature. Our experiments begin with a hyperparameter preselection step to improve the results.
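The abstract outlines a pipeline of per-frame facial descriptors (MediaPipe Face Mesh, MobileNetV3 eye embeddings) fed as feature vectors into an LSTM classifier with five vigilance levels. The Python sketch below illustrates one way such a pipeline could be wired together; it is not the authors' implementation, and the landmark indices, feature set, sequence length, and layer sizes are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): per-frame eye descriptors from
# MediaPipe Face Mesh feeding a 5-class LSTM vigilance classifier.
import numpy as np
import mediapipe as mp
import tensorflow as tf

# Face Mesh with refined landmarks (refine_landmarks=True adds iris points 468-477).
face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1, refine_landmarks=True)

# Eyelid/eye-corner landmark indices for one eye (commonly used for an
# eye-aspect-ratio style openness descriptor; assumed here, not from the paper).
EYE = {"top": 159, "bottom": 145, "left": 33, "right": 133}

def eye_openness(lm):
    """Vertical-to-horizontal ratio of one eye (illustrative descriptor)."""
    v = np.hypot(lm[EYE["top"]].x - lm[EYE["bottom"]].x,
                 lm[EYE["top"]].y - lm[EYE["bottom"]].y)
    h = np.hypot(lm[EYE["left"]].x - lm[EYE["right"]].x,
                 lm[EYE["left"]].y - lm[EYE["right"]].y)
    return v / (h + 1e-6)

def frame_features(rgb_frame):
    """Return a small per-frame feature vector, or None if no face is detected."""
    results = face_mesh.process(rgb_frame)  # expects an RGB uint8 array
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    iris = lm[468]  # refined iris center of one eye
    return np.array([eye_openness(lm), iris.x, iris.y], dtype=np.float32)

# Transfer-learning backbone for normalized eye crops (weights=None keeps the
# sketch offline; pretrained weights would be used in practice). Its pooled
# embedding would be concatenated with the geometric features above.
eye_backbone = tf.keras.applications.MobileNetV3Small(
    include_top=False, pooling="avg", weights=None, input_shape=(96, 96, 3))

# Sequence model: vectors of per-frame features -> 5 vigilance levels.
# For brevity the toy demo below uses only the 3 geometric features.
SEQ_LEN, N_FEATURES, N_CLASSES = 30, 3, 5  # assumed values
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy batch illustrating the expected tensor shapes.
x = np.random.rand(8, SEQ_LEN, N_FEATURES).astype(np.float32)
y = np.random.randint(0, N_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```

In this sketch the LSTM consumes a sliding window of per-frame feature vectors, which is what allows the classifier to anticipate a drop in vigilance from the preceding frames rather than from a single image.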