The rate of annual road accidents attributed to drowsy driving is significantly high. Consequently, researchers have proposed several methods for detecting driver drowsiness, including subjective, physiological, behavioral, vehicle-based, and hybrid methods. However, recent road safety reports still identify drowsy driving as a major cause of road accidents, plausibly because current driver drowsiness detection (DDD) solutions are either intrusive or expensive, which hinders their ubiquitous adoption. This research serves to bridge this gap by providing a test-bed for achieving a non-intrusive and low-cost DDD solution. A behavioral DDD solution is proposed based on tracking the face and eye state of the driver, with the aim of making this research a starting point toward pervasive DDD. To achieve this, the National Tsing Hua University (NTHU) Computer Vision Lab's driver drowsiness detection video dataset was utilized. Several video and image processing operations were performed on the videos to detect the drivers' eye states. From the eye states, three important drowsiness features were extracted: percentage of eyelid closure (PERCLOS), blink frequency (BF), and maximum closure duration (MCD) of the eyes. These features were then fed as inputs into several machine learning models for drowsiness classification. Models based on the K-nearest Neighbors (KNN), Support Vector Machine (SVM), Logistic Regression, and Artificial Neural Network (ANN) algorithms were evaluated by calculating their accuracy, sensitivity, specificity, miss rate, and false alarm rate. Although all five metrics were computed, the focus was on achieving optimal accuracies and miss rates. The results show that the best models were a KNN model with k = 31 and an ANN model that used the Adadelta optimizer with a three-hidden-layer network of 3, 27, and 9 neurons respectively. The KNN model obtained an accuracy of 72.25% with a miss rate of 16.67%, while the ANN model obtained an accuracy of 71.61% and a miss rate of 14.44%.
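The sketch below illustrates how the three drowsiness features named above (PERCLOS, BF, MCD) could be derived from a per-frame binary eye-state sequence. It is a minimal example, not the paper's implementation: the function name `drowsiness_features`, the frame rate, and the demo window are assumed for illustration only.

```python
# Illustrative sketch (not the paper's code): deriving PERCLOS, blink frequency (BF),
# and maximum closure duration (MCD) from a per-frame binary eye-state sequence,
# where 1 = eye closed and 0 = eye open. The frame rate and window are assumed values.

def drowsiness_features(eye_states, fps=30):
    """Return (PERCLOS %, BF in blinks/s, MCD in seconds) for one analysis window."""
    n_frames = len(eye_states)
    window_seconds = n_frames / fps

    # PERCLOS: percentage of frames in the window with the eyes closed.
    perclos = sum(eye_states) / n_frames * 100

    # Scan closure runs to count blinks and find the longest continuous closure.
    blinks = 0
    longest_run = 0
    current_run = 0
    for state in eye_states:
        if state == 1:
            current_run += 1
            longest_run = max(longest_run, current_run)
        else:
            if current_run > 0:
                blinks += 1       # a closure run that just ended counts as one blink
            current_run = 0
    if current_run > 0:
        blinks += 1               # closure still in progress at the window's end

    bf = blinks / window_seconds  # blink frequency in blinks per second
    mcd = longest_run / fps       # maximum closure duration in seconds
    return perclos, bf, mcd


if __name__ == "__main__":
    # Example: a 10-frame window at 30 fps containing two closure episodes.
    demo = [0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
    print(drowsiness_features(demo, fps=30))
```

In a pipeline like the one described, feature vectors of this form would then be passed to the classifiers (KNN, SVM, Logistic Regression, ANN) for drowsiness classification.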