In intelligent vehicles, it is essential to monitor the driver’s condition; recognizing the driver’s emotional state is among the most challenging and important of these tasks. Most previous studies focused on facial expression recognition to monitor the driver’s emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the driver’s real emotion recognizer (DRER), a deep learning-based algorithm that recognizes drivers’ real emotions, which cannot be completely identified from their facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver’s real emotional state. We categorized the driver’s emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared to using only facial expressions and a 146% increase compared to using only electrodermal activity. In conclusion, our proposed method achieves 86.8% accuracy in recognizing the driver’s induced emotion while driving.
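The abstract describes fusing a facial expression recognition output with electrodermal activity (EDA) features to classify the driver’s real emotion. The sketch below illustrates one common form of such fusion, feature-level concatenation followed by a linear classifier; the feature dimensions, emotion category count, and single-layer classifier are illustrative assumptions, not the paper’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the abstract does not specify them.
N_EXPR = 7      # facial-expression class probabilities from the CNN
N_EDA = 4       # features extracted from the electrodermal activity signal
N_EMOTION = 4   # driver emotion categories (illustrative)

def fuse_and_classify(expr_probs, eda_feats, W, b):
    """Feature-level fusion sketch: concatenate the facial-expression
    output with EDA features, then apply one linear layer + softmax."""
    x = np.concatenate([expr_probs, eda_feats])
    logits = W @ x + b
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Untrained weights, for shape demonstration only.
W = rng.standard_normal((N_EMOTION, N_EXPR + N_EDA))
b = np.zeros(N_EMOTION)
probs = fuse_and_classify(rng.random(N_EXPR), rng.random(N_EDA), W, b)
```

In practice the fusion layer would be trained end-to-end on the labeled driving data; this fragment only shows how the two modalities can be combined into a single emotion distribution.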
The ex utero intrapartum treatment (EXIT) procedure was introduced to reduce fetal hypoxic damage while establishing an airway in fetuses with upper and lower airway obstruction. Delivery of the fetal head and shoulders while maintaining the uteroplacental circulation offers time to secure the fetal airway. Here, we report two cases of EXIT procedure for fetal airway obstruction, which were successfully managed with extensive preoperative planning by a professional multidisciplinary team.
This paper presents a framework to better identify and measure defects in a bridge using drone-based inspection images integrated with grayscale image enhancement techniques. For this study, a DJI Matrice 210 drone was used to inspect a three-span timber bridge with concrete decking located in Keystone, South Dakota. During the inspection, the drone recorded a series of videos of the bridge in the MOV video format. MOV-based image analysis was conducted to identify a variety of defect types (i.e., efflorescence, water leakage, spalling, and discoloration) on the bridge. To improve defect visibility, a grayscale image enhancement technique was applied to produce visually enhanced images for each defect. The technique used grayscale histogram processing, which adjusts images by realigning their contrast histograms; in a grayscale image, each pixel has an intensity value ranging from 0 (black) to 255 (white). With the enhanced images, pixel-based measurement was conducted to quantify the defects, including efflorescence (3.75 m2), water leakage (4.21 m2), spalling (0.74 m2), and discoloration (2.12 m2). These findings demonstrate that grayscale enhancement of drone inspection images improves defect visibility, enabling more reliable identification and measurement of the defects in the bridge.
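The histogram realignment and pixel-based measurement steps described above can be sketched as follows. This is a minimal illustration using linear contrast stretching over the 0–255 grayscale range and pixel counting with an assumed ground resolution; the paper’s exact enhancement method, threshold, and pixel-to-area scale are not specified, so the values here are hypothetical.

```python
import numpy as np

def stretch_contrast(gray):
    """Realign an 8-bit grayscale histogram so its intensities span
    the full range from 0 (black) to 255 (white)."""
    lo, hi = int(gray.min()), int(gray.max())
    if hi == lo:
        return gray.copy()  # flat image: nothing to stretch
    return ((gray.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def defect_area_m2(mask, m2_per_pixel):
    """Pixel-based measurement: count flagged defect pixels and
    convert to square meters via the image's ground resolution."""
    return float(mask.sum()) * m2_per_pixel

# Toy example: a low-contrast 4x4 patch and a hypothetical defect mask.
patch = np.array([[100, 110, 120, 130]] * 4, dtype=np.uint8)
enhanced = stretch_contrast(patch)   # intensities now span 0..255
mask = enhanced > 128                # brighter pixels flagged as defect
area = defect_area_m2(mask, m2_per_pixel=0.0001)  # assumed resolution
```

In a real inspection workflow the defect mask would come from manual delineation or a segmentation step on the enhanced image, and the per-pixel area would be derived from the drone’s altitude and camera parameters.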