With the development of intelligent automotive human-machine systems, driver emotion detection and recognition has become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data; however, such data cannot represent real driving environments. To address this, this paper proposes a facial expression-based on-road driver emotion recognition network called FERDERnet. The method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver's face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network, pre-trained on the FER and CK+ datasets and then fine-tuned, as the backbone for driver emotion recognition. Five different backbone networks are evaluated, as well as an ensemble of them. Furthermore, to evaluate the proposed method, this paper collected an on-road driver facial expression dataset containing various road scenarios and the corresponding drivers' facial expressions during the driving task, and experiments were performed on this dataset. In terms of efficiency and accuracy, the proposed FERDERnet with an Xception backbone effectively identified on-road driver facial expressions and achieved superior performance compared with the baseline networks and several state-of-the-art networks.
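As a rough illustration of the three-module pipeline described in this abstract, the following Python sketch wires together a face detection step, an augmentation step, and an Xception backbone using OpenCV and Keras. The Haar-cascade detector, input size, 7-class label set, and all function names are our assumptions, not the paper's code; the paper pre-trains on FER and CK+, for which the ImageNet weights below are only a stand-in.

```python
# Illustrative sketch of the three FERDERnet modules (assumptions noted above).
import cv2
import tensorflow as tf
from tensorflow.keras import layers, models

def detect_face(frame):
    """Face detection module: crop the driver's face from a cabin-camera frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = boxes[0]
    return frame[y:y + h, x:x + w]

# Augmentation-based resampling module: random transforms that would be applied
# while oversampling minority emotion classes during batching.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomTranslation(0.1, 0.1),
    layers.RandomContrast(0.2),
])

def build_emotion_net(num_classes=7):
    """Emotion recognition module: Xception backbone, fine-tuned end to end.
    ImageNet weights stand in for the paper's FER/CK+ pre-training."""
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = True  # fine-tune on the on-road driver dataset
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out)
```

An ensemble variant, as the paper describes, would average the softmax outputs of several such backbones rather than relying on Xception alone.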
Advances in technologies such as intelligent connected vehicles and the metaverse are driving the rapid development of automotive intelligent cockpits. From the perspective of the cyber-physical-social system (CPSS), this study proposes an intelligent cockpit composition framework comprising three layers: perception; cognition and decision; and interaction. We also describe the relationship between the intelligent cockpit framework and the outside environment. The framework can dynamically perceive and understand humans and provide feedback on the understanding results, helping to deliver a safe, efficient, and enjoyable experience in the intelligent cockpit. In the cognition and decision layer of the proposed framework, we design a case study of active empathetic auditory regulation of driver anger, focusing on improving road traffic safety. We conducted in-depth interview experiments and designed two auditory regulation materials: active empathy speech (AES) and text-to-speech (TTS) speech. Next, 30 participants were recruited and completed a total of 240 anger-regulated driving experiments in straight and obstacle avoidance scenarios. Finally, we quantitatively analyzed and compared the participants' subjective feelings, physiological changes, driving behaviors, and driving risks, and validated the driver anger regulation quality of AES and TTS. The proposed methods and results can inform the design of future intelligent cockpit emotion regulation systems, toward a better intelligent cockpit.
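To make the layered structure concrete, here is a minimal Python sketch of one perception → cognition and decision → interaction loop. The anger score, the trigger threshold, and every name below are illustrative assumptions rather than the study's implementation.

```python
# Minimal structural sketch of the three-layer intelligent cockpit framework.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverState:
    anger_level: float  # assumed fused score in [0, 1] from in-cabin sensing

def perceive(sensor_frame: dict) -> DriverState:
    """Perception layer: estimate driver state from in-cabin sensor data."""
    # Placeholder fusion: a real system would combine facial expression,
    # physiological signals, and driving behavior here.
    return DriverState(anger_level=sensor_frame.get("anger_score", 0.0))

def decide(state: DriverState) -> Optional[str]:
    """Cognition and decision layer: choose an auditory regulation strategy."""
    if state.anger_level > 0.7:  # assumed trigger threshold
        return "AES"             # active empathy speech, per the case study
    return None

def interact(strategy: Optional[str]) -> None:
    """Interaction layer: deliver the selected regulation material."""
    if strategy is not None:
        print(f"Playing {strategy} regulation audio")  # stand-in for playback

# One closed-loop tick of the framework:
interact(decide(perceive({"anger_score": 0.8})))
```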