Human emotions are integral to daily tasks, and driving is now a typical daily task. Creating a multimodal human emotion dataset for driving tasks is an essential step in human emotion studies. We conducted three experiments to collect a multimodal psychological, physiological, and behavioural dataset for human emotions (PPB-Emo). In Experiment I, 27 participants were recruited, and in-depth interviews were used to explore drivers' viewpoints on driving scenarios that induce different emotions. In Experiment II, 409 participants were recruited, and a questionnaire survey was conducted to obtain information on driving scenarios that induce specific emotions in human drivers; the results were used as the basis for selecting video-audio stimulus materials. In Experiment III, 40 participants were recruited, and the psychological, physiological, and behavioural data of all participants were collected across 280 driving tasks. The PPB-Emo dataset will largely support the analysis of human emotion in driving tasks, and will also benefit human emotion research in other daily tasks.
Background: Clinical prediction models suffer from performance drift as the patient population shifts over time. There is a great need for model updating approaches or modeling frameworks that can effectively use the old and new data. Objective: Based on the paradigm of transfer learning, we aimed to develop a novel modeling framework that transfers old knowledge to the new environment for prediction tasks, and contributes to performance drift correction. Methods: The proposed predictive modeling framework maintains a logistic regression–based stacking ensemble of 2 gradient boosting machine (GBM) models representing old and new knowledge learned from old and new data, respectively (referred to as transfer learning gradient boosting machine [TransferGBM]). The ensemble learning procedure can dynamically balance the old and new knowledge. Using 2010-2017 electronic health record data on a retrospective cohort of 141,696 patients, we validated TransferGBM for hospital-acquired acute kidney injury prediction. Results: The baseline models (ie, transported models) that were trained on 2010 and 2011 data showed significant performance drift in the temporal validation with 2012-2017 data. Refitting these models using updated samples resulted in performance gains in nearly all cases. The proposed TransferGBM model succeeded in achieving uniformly better performance than the refitted models. Conclusions: Under the scenario of population shift, incorporating new knowledge while preserving old knowledge is essential for maintaining stable performance. Transfer learning combined with stacking ensemble learning can help achieve a balance of old and new knowledge in a flexible and adaptive way, even in the case of insufficient new data.
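The core idea of the ensemble described above — a logistic regression meta-learner stacked over two GBMs trained on old and new data — can be sketched as follows. This is a minimal illustration with synthetic data using scikit-learn; the variable names, hyperparameters, and data generation are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for old and new cohorts; population shift is
# simulated by shifting the feature distribution and the decision boundary.
X_old = rng.normal(0.0, 1.0, size=(1000, 5))
y_old = (X_old[:, 0] + X_old[:, 1] > 0.0).astype(int)
X_new = rng.normal(0.5, 1.0, size=(300, 5))
y_new = (X_new[:, 0] + X_new[:, 1] > 0.5).astype(int)

# Base learners: one GBM per era ("old knowledge" and "new knowledge").
gbm_old = GradientBoostingClassifier(random_state=0).fit(X_old, y_old)
gbm_new = GradientBoostingClassifier(random_state=0).fit(X_new, y_new)

# Stacking: a logistic regression meta-learner weighs the two GBMs'
# predicted probabilities, balancing old vs. new knowledge.
meta_features = np.column_stack([
    gbm_old.predict_proba(X_new)[:, 1],
    gbm_new.predict_proba(X_new)[:, 1],
])
meta = LogisticRegression().fit(meta_features, y_new)

def predict_proba(X):
    """Ensemble probability of the positive class for samples X."""
    z = np.column_stack([
        gbm_old.predict_proba(X)[:, 1],
        gbm_new.predict_proba(X)[:, 1],
    ])
    return meta.predict_proba(z)[:, 1]

print(predict_proba(X_new[:3]))
```

Note that fitting the meta-learner on the same new samples used to train the new-data GBM, as done here for brevity, leaks information; in practice the meta-learner would be fitted on held-out or cross-validated predictions.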
Haze significantly impacts various fields, such as autonomous driving, smart cities, and security monitoring. Deep learning has proven effective at removing haze from images. However, obtaining pixel-aligned hazy and clear paired images in the real world is challenging, so synthesized hazy images are often used for training deep networks. These images are typically generated from parameters such as depth information and the atmospheric scattering coefficient. However, this approach may lose important haze details, leading to color distortion or incompletely dehazed images. To address this problem, this paper proposes a method for synthesizing hazy images using a cycle generative adversarial network (CycleGAN). The CycleGAN is trained with unpaired hazy and clear images to learn the features of the hazy images. Real haze features are then added to clear images using the trained CycleGAN, producing pixel-aligned synthesized hazy and clear paired images that can be used for dehazing training. The results demonstrate that the dataset synthesized using this method efficiently solves the problems associated with traditional synthesized datasets. Furthermore, the dehazed images are restored using a super-resolution algorithm, yielding high-resolution clear images. This method broadens the applications of deep learning in haze removal, particularly highlighting its potential in the fields of autonomous driving and smart cities.
Accurate 3D positioning of particles is a critical task in microscopic particle research, with one of the main challenges being the measurement of particle depths. In this paper, we propose a method for detecting particle depths from their blurred images using the depth-from-defocus (DfD) technique and a deep neural network-based object detection framework, You Only Look Once (YOLO). Our method simultaneously provides lateral position information for the particles and has been tested and evaluated on various samples, including synthetic particles, polystyrene particles, blood cells, and plankton, even in noise-filled environments. We achieved autofocus for target particles at different depths using generative adversarial networks (GANs), obtaining clearly focused images. Our algorithm can process a single multi-target image in 0.008 s, allowing real-time application. Our proposed method provides new opportunities for particle field research.
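The DfD principle this method relies on is the standard thin-lens relation between defocus blur and depth: with a lens of focal length f and aperture diameter A focused at distance d_f, an object at distance d produces a blur circle of diameter c = A f |d − d_f| / (d (d_f − f)). A minimal sketch of that relation (the numeric lens parameters are illustrative assumptions, not values from the paper):

```python
def blur_diameter(d, d_focus, f, aperture):
    """Thin-lens circle-of-confusion diameter for an object at distance d.

    d, d_focus, and f share the same units (e.g. mm); aperture is the
    lens aperture diameter. All distances must exceed the focal length.
    """
    return aperture * f * abs(d - d_focus) / (d * (d_focus - f))

# Illustrative numbers: a 50 mm lens with a 25 mm aperture focused at 1 m.
# Blur is zero at the focal plane and grows as the object moves off it.
for d in (800.0, 1000.0, 1200.0):
    print(d, blur_diameter(d, 1000.0, 50.0, 25.0))
```

Blur magnitude alone does not reveal which side of the focal plane an object sits on, which is one reason a learned detector operating on the full blurred appearance, as in the paper's YOLO-based approach, can help.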