In this work, we present a new dataset and a computational strategy for a digital coach that aims to guide users in practicing the protocols of self-attachment therapy. Our framework augments a rule-based conversational agent with a deep-learning classifier that identifies the underlying emotion in a user's text response, as well as a deep-learning-assisted retrieval method for producing novel, fluent and empathetic utterances. We also craft a set of human-like personas that users can choose to interact with. Our goal is to achieve a high level of engagement during virtual therapy sessions. We evaluate the effectiveness of our framework in a non-clinical trial with N=16 participants, each of whom interacted with the agent at least four times over five days. We find that our platform is consistently rated higher for empathy, user engagement and usefulness than the simple rule-based framework. Finally, we provide guidelines to further improve the design and performance of the application based on the feedback received.
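As a rough sketch of the two-stage design this abstract describes (emotion classification followed by retrieval of an empathetic response), the Python below assumes off-the-shelf public models and an illustrative response table; none of the model names or utterances come from the paper itself.

```python
# Hypothetical sketch: (1) classify the emotion in the user's utterance,
# (2) retrieve the most semantically similar empathetic response for that
# emotion. Models and the RESPONSES table are illustrative assumptions.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

emotion_clf = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Candidate empathetic utterances, bucketed by emotion (illustrative).
RESPONSES = {
    "sadness": ["I'm sorry you're feeling low. Would you like to try a protocol?",
                "That sounds really hard. I'm here with you."],
    "joy": ["That's wonderful to hear! Shall we build on that feeling?"],
}

def reply(user_text: str) -> str:
    emotion = emotion_clf(user_text)[0]["label"]      # e.g. "sadness"
    candidates = RESPONSES.get(emotion, ["Tell me more about that."])
    query = encoder.encode(user_text, convert_to_tensor=True)
    scores = util.cos_sim(query,
                          encoder.encode(candidates, convert_to_tensor=True))
    return candidates[int(scores.argmax())]           # best-matching utterance

print(reply("I've been feeling really down since Monday."))
```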
INTRODUCTION: Self-attachment is a new, self-administrable psychotherapeutic intervention based on creating an affectional bond between users and their childhood-self, using their childhood photos, to develop the capacity for affect self-regulation. Technological advances such as virtual reality (VR) can enhance this intervention and make it scalable. METHODS: We have developed a user-friendly, interactive VR platform for self-attachment featuring a virtual assistant and a customised child avatar that resembles the user in childhood. The virtual agent interacts with the user and, using an emotion recognition algorithm, suggests an appropriate self-attachment sub-protocol for the user to undertake. The platform also allows the user to interact with the child avatar, for example by embracing it. RESULTS: In a small preliminary trial, we show that such a VR experience can be realistic and lead to a positive emotion change in the user.
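To illustrate how a recognised emotion might drive the agent's suggestion of a sub-protocol, here is a minimal, purely hypothetical mapping; the emotion labels and protocol descriptions below are placeholders, not the platform's actual specification.

```python
# Illustrative emotion-to-sub-protocol mapping; labels and protocol
# text are assumptions for the sake of the sketch.
SUBPROTOCOL_FOR_EMOTION = {
    "sad":     "Comfort your childhood-self with soothing words",
    "angry":   "Acknowledge the emotion, then embrace the child avatar",
    "happy":   "Celebrate together with your childhood-self",
    "neutral": "Revisit a happy childhood memory",
}

def suggest_subprotocol(detected_emotion: str) -> str:
    # Fall back to a neutral protocol for unrecognised emotions.
    return SUBPROTOCOL_FOR_EMOTION.get(
        detected_emotion, SUBPROTOCOL_FOR_EMOTION["neutral"])

print(suggest_subprotocol("sad"))
```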
In this work, we propose a computational framework that leverages existing data in another language (English) to create a conversational agent for delivering the Self-Attachment Technique (SAT) in Mandarin. Our framework does not require large-scale human translation, yet achieves comparable performance whilst maintaining safety and reliability. We propose two different methods of augmenting the available response data through empathetic rewriting. We evaluate our chatbot against a previous, English-only SAT chatbot in non-clinical human trials (N = 42), each lasting five days, and quantitatively show that it attains a comparable level of performance to the English SAT chatbot. We provide a qualitative analysis of the study's limitations, along with suggestions to guide future improvements.
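A minimal sketch of the augmentation idea, under stated assumptions: machine-translate the existing English responses into Mandarin, then pass each through an empathetic-rewriting step. The translation checkpoint is a public model chosen for illustration, and `empathetic_rewrite` is a placeholder for the paper's actual rewriting methods.

```python
# Sketch: translate existing English SAT responses to Mandarin, then
# rewrite each one empathetically. `empathetic_rewrite` is a stand-in.
from transformers import pipeline

translate_en_zh = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

def empathetic_rewrite(text_zh: str) -> str:
    """Placeholder: rewrite a Mandarin response in a warmer, more
    empathetic register (e.g. with a fine-tuned seq2seq model)."""
    return text_zh  # identity stand-in for illustration

def augment_responses(english_responses: list[str]) -> list[str]:
    translated = [out["translation_text"]
                  for out in translate_en_zh(english_responses)]
    return [empathetic_rewrite(t) for t in translated]

print(augment_responses(
    ["I'm sorry to hear that. Would you like to try a protocol?"]))
```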
Emotion recognition from facial visual signals has attracted enormous interest over the past two decades, as researchers attempt to teach computers to better understand a person's emotional state; reliable emotion recognition can greatly enrich human-computer interaction. Emotions are intricate, so we need a representative model of the full spectrum displayed by humans. A multi-dimensional emotion representation, which includes valence (how positive or negative an emotion is) and arousal (how calming or exciting it is), is a good fit. Virtual Reality (VR), a fully immersive computer-generated world, has witnessed significant growth over the past years. It has a wide range of applications, including in mental health, such as exposure therapy and the self-attachment technique. In this paper, we address the problem of emotion recognition when the user is immersed in VR. Understanding emotions from facial cues is in itself a demanding task; it is made even harder when a head-mounted VR headset is worn, since the headset occludes the upper half of the face. We overcome this issue by introducing EmoFAN-VR, a deep neural network architecture that analyses facial affect with a high level of accuracy in the presence of severe occlusion from a VR headset. We simulate an occlusion representing a VR headset and apply it to all datasets in this work. EmoFAN-VR predicts both discrete and continuous emotions in one step, so it can be used in real-time deployment. We fine-tune our network on the AffectNet dataset under VR occlusion and test it on the AFEW-VA dataset, setting a new baseline for that dataset under VR occlusion.
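The occlusion simulation lends itself to a short sketch: black out the region of a face crop that a head-mounted display would cover before training and evaluation. The mask geometry below (the top 45% of the crop) is an assumption for illustration, not the paper's exact mask.

```python
# Sketch of simulating a VR-headset occlusion on a face crop. The
# top_fraction value is an assumed mask geometry, not the paper's.
import numpy as np

def apply_vr_occlusion(face: np.ndarray, top_fraction: float = 0.45) -> np.ndarray:
    """Zero out the top `top_fraction` of an HxWxC face crop,
    approximating the region a head-mounted display covers."""
    occluded = face.copy()
    cutoff = int(face.shape[0] * top_fraction)
    occluded[:cutoff, :, :] = 0
    return occluded

# Example: a dummy 256x256 RGB face crop.
dummy = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
masked = apply_vr_occlusion(dummy)
assert (masked[:115] == 0).all()  # upper region is fully occluded
```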