Understanding emotional human behavior in its multimodal and continuous aspects is necessary for studying human-machine interaction and creating convincing social agents. As a first step, we propose a system that recognizes the continuous emotional behavior people express during communication, based on their gestures and whole-body motion dynamics. The features used to classify the motion are inspired by Laban Movement Analysis entities and are mapped onto the well-known Russell Circumplex Model. We chose a specific case study that represents an ideal case of multimodal behavior emphasizing bodily expression: theater performance. Using a trained neural network and annotated data, our system describes motion behavior over time as trajectories on the Russell Circumplex Model diagram during theater performances. This work contributes to the understanding of human behavior and expression and is a first step toward a complete continuous emotion recognition system, whose next stage will add facial expressions. Copyright © 2016 John Wiley & Sons, Ltd.
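The mapping described above can be sketched as a small regression problem: a neural network takes Laban-inspired motion features per frame and predicts a point in the two-dimensional valence-arousal plane of the Russell Circumplex Model. This is a minimal illustrative sketch with synthetic data; the feature count, network size, and target construction are assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: regressing Laban-inspired motion features onto
# valence-arousal coordinates of the Russell Circumplex Model.
# Feature dimensions and targets are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_frames, n_laban = 500, 6           # e.g. weight, time, space, flow effort cues
X = rng.normal(size=(n_frames, n_laban))
W = rng.normal(size=(n_laban, 2))
y = np.tanh(X @ W)                   # fake targets in [-1, 1]^2: (valence, arousal)

# Small multi-output MLP: features in, circumplex coordinates out
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=1).fit(X, y)
trajectory = net.predict(X)          # per-frame points on the circumplex
print(trajectory.shape)              # one (valence, arousal) pair per frame
```

Plotting `trajectory` over time would yield the kind of emotion-trajectory diagram the abstract describes.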
Learning a couple dance such as salsa is challenging because it requires correctly understanding and assimilating all the dance skills (guidance, rhythm, style). Salsa is traditionally learned by attending a dance class with a teacher and practicing with a partner; however, the difficulty of accessing such classes and the variability of the dance environment can impede the learning process. Understanding how people learn using a virtual reality platform could yield valuable knowledge in motion analysis and can be a first step toward a complementary at-home learning system. In this paper, we propose an interactive learning application, in the form of a virtual reality game, that aims to help users improve their salsa dancing skills. The application was designed from previous literature and expert discussions and combines several components that simulate salsa dance: a virtual partner with interactive control to dance with, visual and haptic feedback, and a game mechanic with dance tasks. The application was tested on a two-class panel of 20 regular dancers and 20 non-dancers, whose learning was evaluated and analyzed through the extraction of Musical Motion Features and the Laban Motion Analysis system. Both motion analysis frameworks were compared before and after training and show convergence of the non-dancers' profiles toward those of the regular dancers, which validates the learning process. The work presented here has profound implications for future studies of motion analysis, couple dance learning, and human-human interaction.
Learning a couple dance such as salsa is challenging because it requires correctly understanding and assimilating all the dance parameters. Although salsa is traditionally learned with a teacher, some situations and the variability of the dance class environment can impact the learning process. A better understanding of what makes a good salsa dancer from a motion analysis perspective would bring valuable knowledge and could support better learning. In this paper, we propose a set of music- and interaction-based motion features to classify the performance of salsa dancing couples into three learning states (beginner, intermediate, and expert). These motion features interpret components gathered through interviews with teachers and professionals, together with dance features identified in a systematic review of the literature. For the present study, a motion capture database (SALSA) was recorded of 26 couples at three skill levels dancing at 10 different tempos (260 clips). Each recorded clip contains a basic-step sequence and an extended improvisation sequence, lasting two minutes in total, captured at 120 frames per second. Each of the 27 motion features was computed on a sliding window corresponding to the 8-beat reference in dance. Several multiclass classifiers were tested, mainly k-nearest neighbors, random forest, and support vector machine, reaching classification accuracies of up to 81% for three levels and 92% for two levels. A subsequent feature analysis validates 23 of the 27 proposed features. The work presented here has profound implications for future studies of motion analysis, couple dance learning, and human-human interaction.
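The classification pipeline above (windowed motion features fed to a multiclass classifier) can be sketched as follows. This is a minimal illustration on synthetic data; the feature matrix here is random, and only its shape (27 features per 8-beat window, three skill classes) mirrors the abstract's setup.

```python
# Hypothetical sketch: classifying dancer skill level from windowed motion
# features. Synthetic stand-in data; only the dimensions follow the abstract
# (27 features per sliding window, 3 classes: beginner/intermediate/expert).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows, n_features = 600, 27
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 3, size=n_windows)   # 0=beginner, 1=intermediate, 2=expert
X += y[:, None] * 0.8                    # shift class means so the demo is separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"3-class accuracy: {acc:.2f}")
```

Swapping in `KNeighborsClassifier` or `SVC` from scikit-learn reproduces the other two classifier families the study compares.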