Estimation of human emotions plays an important role in the development of modern brain-computer interface devices such as the Emotiv EPOC+ headset. In this paper, we present an experiment to assess the classification accuracy of the emotional states provided by the headset's application programming interface (API). In the experiment, several sets of images selected from the International Affective Picture System (IAPS) dataset were shown to sixteen participants wearing the headset. First, the participants' responses to the elicited emotions, collected with a Self-Assessment Manikin (SAM) questionnaire, were compared with the validated IAPS predefined valence, arousal and dominance values. After statistically demonstrating that the responses are highly correlated with the IAPS values, several artificial neural networks (ANNs) based on the multilayer perceptron (MLP) architecture were tested to calculate the classification accuracy of the Emotiv EPOC+ API emotional outcomes. The best result was obtained for an ANN configuration with three hidden layers of 30, 8 and 3 neurons, respectively. This configuration achieved 85% classification accuracy, which means that the emotional estimation provided by the headset can be used with high confidence in real-time applications based on users' emotional states. Thus, the emotional states given by the headset's API may be used without further processing of the electroencephalogram signals acquired from the scalp, which would add a level of difficulty.
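The best-performing architecture described above (three hidden layers of 30, 8 and 3 neurons) can be sketched as a plain forward pass. This is a minimal illustration only: the input dimension (14 features) and the number of output classes (4) are assumptions for the sketch, not values taken from the paper, and the weights are random rather than trained on the Emotiv EPOC+ API outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes for the reported best configuration: an input layer,
# hidden layers of 30, 8 and 3 neurons, and an output layer.
# The 14 inputs and 4 output classes are placeholder assumptions.
layer_sizes = [14, 30, 8, 3, 4]

weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """One forward pass through the MLP (sigmoid hidden units, softmax output)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # sigmoid activation
    z = a @ weights[-1] + biases[-1]
    e = np.exp(z - z.max())                     # numerically stable softmax
    return e / e.sum()                          # class probabilities

probs = forward(rng.normal(size=14))
```

In a real pipeline the weights would be fitted by backpropagation on the labelled headset data; here the sketch only makes the layer dimensions concrete.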
The ability to recognise facial emotions is essential for successful social interaction. The most common stimuli used when evaluating this ability are photographs. Although these stimuli have proved to be valid, they do not offer the level of realism that virtual humans have achieved. The objective of the present paper is the validation of a new set of dynamic virtual faces (DVFs) that mimic the six basic emotions plus the neutral expression. The faces are prepared to be observed with low and high dynamism, and from front and side views. For this purpose, 204 healthy participants, stratified by gender, age and education level, were recruited to assess their facial affect recognition with the set of DVFs. The accuracy of their responses was compared with the already validated Penn Emotion Recognition Test (ER-40). The overall accuracy in the identification of emotions was higher for the DVFs (88.25%) than for the ER-40 faces (82.60%). The percentage of hits for each DVF emotion was high, especially for the neutral expression and the happiness emotion. No statistically significant differences were found regarding gender, nor between younger adults and adults over 60 years of age. Moreover, there was an increase in hits for avatar faces showing greater dynamism, as well as for front views of the DVFs compared with their profile presentations. In conclusion, DVFs are as valid as standardised natural faces for accurately recreating human-like facial expressions of emotions.
Auditory hallucinations are common and distressing symptoms of schizophrenia. They are commonly treated with pharmacological approaches but, unfortunately, such approaches are not effective in all patients. In cases in which the use of antipsychotic drugs is not possible or not recommended, psychotherapeutic interventions are used to help patients gain power and control over the voices they hear. Recently, virtual reality technologies have been incorporated into this type of therapy: a virtual representation of the patient's voice (an avatar) is created in a controlled computer-based environment, and the patient is encouraged to confront it. Unfortunately, the software tools used in these therapies are not described in depth and, even more importantly, to the best of our knowledge, their usability, utility and the therapists' and patients' intention to use them have not been evaluated sufficiently. Involving end users in software development is beneficial for obtaining useful and usable tools. Hence, the two contributions of this paper are (1) the description of an avatar creation system and the main technical details of the configuration of auditory hallucination avatars, and (2) its evaluation from both the therapists' and the patients' viewpoints. The evaluation focuses not only on usability but also on the acceptance of the technology, an important indicator of the future use of a new technological tool. Moreover, the most important results, the lessons learned and the main limitations of our study are discussed.