Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as "body language" and comment on general aspects such as gender differences and culture dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and discuss static and dynamic body pose estimation methods, both in RGB and 3D. We then review the recent literature on representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g. human detection and pose estimation) are nowadays mature technologies fully developed for robust large-scale analysis, we show that for emotion recognition labelled data remain scarce, there is no agreement on clearly defined output spaces, and the representations are shallow, largely based on naive geometric descriptors of body pose.
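To make the "naive geometrical representations" the survey mentions concrete, here is a minimal sketch of how such shallow descriptors are typically computed from 2D pose keypoints. The joint indices and the choice of features are illustrative assumptions, not taken from the survey:

```python
import numpy as np

# Hypothetical COCO-style keypoint indices; actual indices depend on the pose estimator.
NECK, L_SHOULDER, L_ELBOW, L_WRIST = 1, 5, 6, 7

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def geometric_features(keypoints):
    """keypoints: (num_joints, 2) array of 2D pose coordinates for one frame.
    Returns a small geometric feature vector (one joint angle and one
    scale-normalised distance) as an example of a 'shallow' representation."""
    elbow_angle = joint_angle(keypoints[L_SHOULDER], keypoints[L_ELBOW], keypoints[L_WRIST])
    # Normalise the wrist-neck distance by the shoulder-neck distance for scale invariance.
    scale = np.linalg.norm(keypoints[L_SHOULDER] - keypoints[NECK]) + 1e-8
    wrist_neck = np.linalg.norm(keypoints[L_WRIST] - keypoints[NECK]) / scale
    return np.array([elbow_angle, wrist_neck])

# Example with a random pose, for demonstration only.
pose = np.random.rand(18, 2)
print(geometric_features(pose))
```

Descriptors of this kind are easy to compute but discard most appearance and dynamics information, which is precisely the limitation the survey highlights.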
In the education process, students face problems with understanding due to the complexity of the material and the need for abstract thinking and concepts. More and more educational centres around the world have started to introduce powerful new technology-based tools that help meet the needs of the diverse student population. Over the last several years, virtual reality (VR) has moved from being the purview of gaming to professional development. It plays an important role in the teaching process, providing an interesting and engaging way of acquiring information. What follows is an overview of the big trends, opportunities and concerns associated with VR in education. We present new opportunities in VR and put together the most interesting recent virtual reality applications used in education in relation to several education areas, such as general, engineering and health-related education. Additionally, this survey contributes by presenting methods for creating scenarios and different approaches for testing and validation. Lastly, we conclude and discuss future directions of VR and its potential to improve the learning experience.

Laboratory classes must be carried out under supervision; therefore, students cannot self-configure lab equipment or experience states of emergency or the effects of misconfiguration, which may lead to equipment damage. Moreover, there is no possibility to practise and catch up outside the laboratory schedule. Current solutions are modern technologies such as online courses [8,9], blended learning [10-13], different computer-based platforms [14-18] and many others, which allow students to repeat the same topic several times, make mistakes and learn from them. Numerous examples of hardware and software that have been successful in educational processes indicate that the edtech industry can improve learning outcomes for the majority of students [19]. More and more educational centres around the world are starting to introduce powerful new technology tools that help them meet the needs of diverse student populations. Traditional books are being replaced by digital instructional content (especially from open educational resources) [20]. Notebooks, tablets and cell phones with dedicated applications have replaced classical copybooks [21]. Distance [22] and personalised learning [23] are used to tailor education to each student's academic strengths, weaknesses, preferences and goals. The use of information and communication technologies has been found to improve student attitudes towards learning [24][25][26][27]. It is a rapidly growing field of research, continually developing and looking for new technological solutions. Over the last several years, Virtual Reality (VR), which provides an interactive computer-generated environment, has moved from being the purview of gaming to professional development in fields such as military, psychology, medicine and teaching applications. In 1987, Jaron Lanier, together with Steve Bryson, formulated the first definition of VR, which they described…
People express emotions through different modalities. Using both verbal and nonverbal communication channels makes it possible to build a system in which the emotional state is expressed more clearly and is therefore easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. This article presents an analysis of audiovisual information for recognizing human emotions. A cross-corpus evaluation is performed using three databases as the training set (SAVEE, eNTERFACE'05 and RML) and AFEW (a database simulating real-world conditions) as the testing set. Emotional speech is represented by commonly used audio and spectral features as well as MFCC coefficients. The SVM algorithm is used for classification. For facial expressions, faces in key frames are located using the Viola-Jones face detection algorithm, and facial emotion classification is done by a CNN (AlexNet). Multimodal emotion recognition is based on decision-level fusion. The performance of the emotion recognition algorithm is compared with the judgements of human decision makers.
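The distinctive step in this pipeline is the decision-level fusion, so here is a minimal sketch of it, assuming each modality produces a per-class probability vector (e.g. SVM probability outputs for audio, CNN softmax scores for the face). The emotion list, weights and scores are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative class set; the actual label space depends on the corpora used.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_decisions(p_audio, p_face, w_audio=0.4, w_face=0.6):
    """Decision-level fusion: combine per-class probability vectors from the
    audio classifier and the face classifier by a weighted average, then
    pick the most probable emotion."""
    p_audio = np.asarray(p_audio, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    fused = w_audio * p_audio + w_face * p_face
    fused /= fused.sum()  # renormalise to a probability distribution
    return EMOTIONS[int(np.argmax(fused))], fused

# Illustrative scores only; in practice these come from the trained models.
audio_scores = [0.10, 0.05, 0.20, 0.40, 0.15, 0.10]  # e.g. SVM on MFCC/spectral features
face_scores  = [0.05, 0.05, 0.10, 0.60, 0.10, 0.10]  # e.g. AlexNet softmax on a key frame
label, probs = fuse_decisions(audio_scores, face_scores)
print(label, probs)
```

Fusing at the decision level keeps the two classifiers independent, so either modality can be retrained or replaced without touching the other.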
Automatic emotion recognition has become an important trend in many artificial intelligence (AI) based applications and has been widely explored in recent years. Most research in the area of automated emotion recognition is based on facial expressions or speech signals. Although the influence of the emotional state on body movements is undeniable, this source of expression is still underestimated in automatic analysis. In this paper, we propose a novel method to recognise seven basic emotional states—namely, happy, sad, surprise, fear, anger, disgust and neutral—utilising body movement. We analyse motion capture data under seven basic emotional states recorded from professional actors/actresses using a Microsoft Kinect v2 sensor. We propose a new representation of affective movements based on sequences of body joints. The proposed algorithm creates a sequential model of affective movement based on low-level features inferred from the spatial location and orientation of joints within the tracked skeleton. In the experiments, different deep neural networks were employed and compared to recognise the emotional state of the acquired motion sequences. The results show the feasibility of automatic emotion recognition from sequences of body gestures, which can serve as an additional source of information in multimodal emotion recognition.
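As a rough illustration of how such a sequential model can be set up, here is a sketch of a recurrent classifier over per-frame skeleton features. It assumes Kinect v2's 25 tracked joints, each described by a 3D position and an orientation quaternion; the architecture and sizes are illustrative assumptions, not the paper's exact model:

```python
import torch
import torch.nn as nn

# Kinect v2 tracks 25 joints; per frame we concatenate each joint's 3D
# position (x, y, z) and orientation quaternion (w, x, y, z): 25 * 7 = 175.
NUM_JOINTS, FEATS_PER_JOINT, NUM_EMOTIONS = 25, 7, 7

class EmotionLSTM(nn.Module):
    """Sequence classifier over per-frame skeleton feature vectors.
    An illustrative recurrent model, not the paper's exact architecture."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(NUM_JOINTS * FEATS_PER_JOINT, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_EMOTIONS)

    def forward(self, x):           # x: (batch, frames, 175)
        _, (h_n, _) = self.lstm(x)  # h_n: (1, batch, hidden), final hidden state
        return self.head(h_n[-1])   # per-emotion logits

model = EmotionLSTM()
clip = torch.randn(4, 60, NUM_JOINTS * FEATS_PER_JOINT)  # 4 clips, 60 frames each
logits = model(clip)
print(logits.shape)  # torch.Size([4, 7])
```

Taking the final hidden state summarises the whole movement before classification; variants could instead pool over all time steps or use attention.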
A best recognition rate of 78% has been obtained, achieved for happiness voice signals. The proposed method yields a 13.78% higher average recognition rate and a 28.1% higher best recognition rate than linear discriminant analysis, as well as a 6.58% higher average recognition rate than deep neural networks, both of which have been implemented on the same database.