Automatic prediction of emotion promises to revolutionise human-computer interaction. Recent trends involve the fusion of multiple modalities (audio, visual, and physiological) to classify emotional state. However, practical considerations 'in the wild' limit the collection of physiological data to commoditised heartbeat sensors. Furthermore, real-world applications often require some measure of uncertainty over model output. We present here an end-to-end deep learning model for classifying emotional valence from unimodal heartbeat data. We further propose a Bayesian framework for modelling uncertainty over valence predictions, and describe a procedure for tuning output according to varying demands on confidence. We benchmarked our framework against two established datasets in the field and achieved a peak classification accuracy of 90%. These results lay the foundation for applications of affective computing in real-world domains such as healthcare, where a high premium is placed on non-invasive data collection and predictive certainty.
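The abstract does not specify which Bayesian machinery is used; as a rough illustration only, Monte Carlo dropout is one common way to obtain predictive uncertainty from a deep classifier and to tune output to a confidence demand by abstaining. Everything here (`net`, `n_samples`, the threshold `tau`, the abstain label) is a hypothetical sketch, not the authors' implementation.

```python
# Illustrative sketch of uncertainty via Monte Carlo dropout (an assumed
# technique, not necessarily the paper's): keep dropout active at inference
# and treat the spread across stochastic forward passes as uncertainty.
import torch

def predict_with_uncertainty(net, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout active."""
    net.train()  # leaves dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(x), dim=-1)
                             for _ in range(n_samples)])
    mean = probs.mean(dim=0)   # predictive distribution over valence classes
    std = probs.std(dim=0)     # spread across passes, a proxy for uncertainty
    return mean, std

def classify_or_abstain(mean, std, tau=0.05):
    """Tune output to a confidence demand: abstain when spread exceeds tau."""
    confident = std.max(dim=-1).values <= tau
    label = mean.argmax(dim=-1)
    return torch.where(confident, label,
                       torch.full_like(label, -1))  # -1 marks "abstain"
```

Lowering `tau` trades coverage for confidence, which is one plausible reading of "tuning output according to varying demands on confidence".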
Automatic detection of emotion has the potential to revolutionize mental health and wellbeing. Recent work has been successful in predicting affect from unimodal electrocardiogram (ECG) data. However, to be immediately relevant for real-world applications, physiology-based emotion detection must make use of ubiquitous photoplethysmogram (PPG) data collected by affordable consumer fitness trackers. Additionally, applications of emotion detection in healthcare settings will require some measure of uncertainty over model predictions. We present here a Bayesian deep learning model for end-to-end classification of emotional valence, using only the unimodal heartbeat time series collected by a consumer fitness tracker (Garmin Vívosmart 3). We collected a new dataset for this task, and report a peak F1 score of 0.7. This demonstrates the practical relevance of physiology-based emotion detection 'in the wild' today.
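The paper's actual architecture is not described here; as a loose sketch of what an end-to-end classifier over a unimodal heartbeat time series could look like, a minimal 1-D convolutional network in PyTorch. The window length, channel widths, and layer count are all assumptions for illustration.

```python
# Minimal hypothetical sketch (not the paper's model): a 1-D convolutional
# classifier mapping a raw heart-rate window to binary valence.
import torch
import torch.nn as nn

class ValenceNet(nn.Module):
    def __init__(self, window_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),        # pool over time
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                # also enables MC sampling as above
            nn.Linear(32, 2),               # two classes: low vs. high valence
        )

    def forward(self, x):                   # x: (batch, 1, window_len)
        return self.head(self.features(x))

logits = ValenceNet()(torch.randn(8, 1, 256))  # -> shape (8, 2)
```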
Here, we have developed a deep learning method to automatically detect and quantify six clinically relevant atrophic features associated with macular atrophy (MA), using optical coherence tomography (OCT) analysis of patients with wet age-related macular degeneration (AMD). The development of MA in patients with AMD results in irreversible blindness, and despite the recent development of novel treatments, there is currently no effective method for early diagnosis of this condition. Using an OCT dataset of 2211 B-scans from 45 volumetric scans of 8 patients, a convolutional neural network was trained with a one-against-all strategy to segment all six atrophic features, followed by validation to evaluate the performance of the models. The models achieved a mean Dice similarity coefficient of 0.706 ± 0.039, a mean precision of 0.834 ± 0.048, and a mean sensitivity of 0.615 ± 0.051. These results show the potential of artificial intelligence-aided methods for early detection and identification of the progression of MA in wet AMD, which can further support and assist clinical decisions.
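For reference, the Dice similarity coefficient reported above is a standard overlap metric between a predicted segmentation mask A and the ground truth B, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks (the array shapes and `eps` smoothing term are illustrative choices, not taken from the paper):

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```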