Drowsiness detection is a crucial step towards safe driving. Considerable effort has been invested in using pervasive sensor data (e.g., video, physiology) combined with machine learning to build automatic drowsiness detection systems. Nevertheless, most existing methods rely on cumbersome wearables (e.g., electroencephalogram) or computer vision algorithms (e.g., eye state analysis), which makes such systems difficult to apply in the wild. Furthermore, the data underlying these methods are inherently limited, as they are typically collected in constrained simulator experiments. In this light, we propose a novel and easily implemented method for driver drowsiness detection based on fully non-invasive multimodal machine learning. Drowsiness levels were estimated from self-reported questionnaires administered under pre-designed protocols. First, we incorporate environmental data (e.g., temperature, humidity, and illuminance, among others), which serve as complementary information to the human activity data recorded via accelerometers or actigraphs. Second, we demonstrate that models trained on daily-life data remain effective when predicting drowsiness for subjects in a driving simulator, which may inform future data collection strategies. Finally, we conduct a comprehensive study of different machine learning methods, including classic 'shallow' models and recent deep models. Experimental results show that our proposed methods reach an unweighted average recall of 64.6% for drowsiness detection in a subject-independent scenario.
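
For readers less familiar with the reported evaluation setup, the sketch below illustrates, under stated assumptions, what a subject-independent evaluation with unweighted average recall (UAR) can look like: all samples from a held-out subject are excluded from training, and UAR is the mean of per-class recalls (available in scikit-learn as `recall_score(..., average="macro")`). The data, the `RandomForestClassifier`, and all variable names are purely illustrative assumptions, not the authors' actual pipeline or results.

```python
# Minimal sketch (assumptions, not the authors' pipeline):
# (a) subject-independent split via LeaveOneGroupOut, and
# (b) unweighted average recall (UAR) = mean of per-class recalls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Hypothetical multimodal features (e.g., activity + environment) and
# binary drowsiness labels for 4 subjects -- purely synthetic data.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)           # 1 = drowsy, 0 = alert
subjects = np.repeat(np.arange(4), 50)     # subject ID per sample

uars = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    # UAR weights each class equally, unlike accuracy on imbalanced data.
    uars.append(recall_score(y[test_idx], y_pred, average="macro"))

print(f"Mean UAR across held-out subjects: {np.mean(uars):.3f}")
```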