Motion capture (mocap) technology is an efficient method for digitizing art performances, and is becoming increasingly popular in the preservation and dissemination of dance performances. Although the captured data can be of very high technical quality, dance allows stylistic variations and improvisations that are not easily identified. The majority of motion analysis algorithms are based on ad-hoc quantitative metrics and thus rarely provide insight into the stylistic qualities of a performance. In this work, we present a framework based on the principles of Laban Movement Analysis (LMA) that aims to identify style qualities in dance motions. The proposed algorithm uses a feature space designed to capture the four LMA components (Body, Effort, Shape, Space), which can subsequently be used for motion comparison and evaluation. We have designed and implemented a prototype virtual reality simulator for teaching folk dances in which users can preview dance segments performed by a 3D avatar and repeat them. The user's movements are captured and compared to the folk dance template motions; then, intuitive feedback is provided to the user based on the LMA components. The results demonstrate the effectiveness of our system, opening new horizons for automatic motion and dance evaluation processes.
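A minimal sketch of how comparing a user's motion to a template in such a per-component feature space might look. The component weights, feature dimensions, and the weighted-distance formulation here are illustrative assumptions, not the paper's actual feature set:

```python
import numpy as np

# Illustrative LMA component labels; the actual features per component
# (e.g., joint extents for Body, acceleration statistics for Effort)
# are assumptions, not taken from the paper.
LMA_COMPONENTS = ["body", "effort", "shape", "space"]

def lma_distance(template, user, weights=None):
    """Weighted distance between two motions, each described by a dict
    mapping an LMA component name to a feature vector (np.ndarray)."""
    if weights is None:
        weights = {c: 1.0 for c in LMA_COMPONENTS}
    return sum(
        weights[c] * np.linalg.norm(template[c] - user[c])
        for c in LMA_COMPONENTS
    )

# Toy check: an identical performance has zero distance to the template.
motion = {c: np.zeros(4) for c in LMA_COMPONENTS}
print(lma_distance(motion, motion))  # → 0.0
```

Per-component distances (rather than the single sum) would map naturally onto the system's component-wise feedback, e.g., "your Effort differs most from the template."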
Prediction of gaze behavior in gaming environments can be a tremendously useful asset to game designers, enabling them to improve gameplay, selectively increase visual fidelity, and optimize the distribution of computing resources. The use of saliency maps is currently being advocated as the method of choice for predicting visual attention, crucially under the assumption that no specific task is present. This is achieved by analyzing images for low-level features such as motion, contrast, luminance, etc. However, the majority of computer games are designed to be easily understood and pose a task readily apparent to most players. Our psychophysical experiment shows that in a task-oriented context such as gaming, the predictive power of saliency maps at design time can be weak. Thus, we argue that a more involved protocol utilizing eye tracking, as part of the computer game design cycle, can be sufficiently robust to succeed in predicting fixation behavior of players.
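For illustration only, a toy bottom-up saliency map driven by a single low-level feature (luminance contrast). Real saliency models combine multi-scale contrast, color, orientation, and motion channels; this minimal version merely shows the style of computation the abstract refers to:

```python
import numpy as np

def contrast_saliency(luminance):
    """Toy saliency map: per-pixel absolute deviation from the global
    mean luminance, normalized to [0, 1]. A stand-in for the low-level
    feature analysis (contrast, luminance, motion) described above."""
    s = np.abs(luminance - luminance.mean())
    peak = s.max()
    return s / peak if peak > 0 else s

# A single bright pixel on a dark background is the most salient spot.
img = np.zeros((4, 4))
img[1, 1] = 1.0
sal = contrast_saliency(img)
print(sal[1, 1] == sal.max())  # → True
```

The abstract's point is precisely that such task-free predictions can fail in games, where the player's task dominates fixation behavior.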
With the emergence of affordable 3D displays, stereoscopy is becoming a commodity. However, users often report discomfort even after brief exposure to stereo content. One of the main reasons is the conflict between vergence and accommodation caused by 3D displays. We investigate dynamic adjustment of stereo parameters in a scene using gaze data in order to reduce discomfort. In a user study, we measured stereo fusion times after abrupt manipulation of disparities using gaze data. We found that gaze-controlled manipulation of disparities can lower fusion times for large disparities. In addition, we found that gaze-controlled disparity adjustment should be applied in a personalized manner and ideally performed only at the extremities or outside the comfort zone of subjects. These results provide important insight into the problems associated with fast disparity manipulation and are essential for developing appealing gaze-contingent and gaze-controlled applications.
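A minimal sketch of what gaze-controlled disparity adjustment along these lines could look like, assuming a per-user comfort limit and a hypothetical attenuation strength (neither value comes from the study):

```python
def adjust_disparity(disparity, gaze_disparity, comfort_limit, strength=0.5):
    """Illustrative gaze-controlled adjustment: pull a disparity value
    toward the currently gazed depth, but only when it lies outside the
    viewer's comfort zone, per the study's recommendation to manipulate
    only at the extremities. `comfort_limit` would be calibrated per
    user; `strength` (0..1) is a hypothetical attenuation factor."""
    if abs(disparity) <= comfort_limit:
        return disparity  # inside the comfort zone: leave untouched
    return gaze_disparity + (1.0 - strength) * (disparity - gaze_disparity)

print(adjust_disparity(2.0, 0.5, 1.0))  # → 1.25 (pulled toward gaze)
print(adjust_disparity(0.5, 0.0, 1.0))  # → 0.5 (unchanged)
```

In a real gaze-contingent renderer this mapping would be applied smoothly over time, since the study shows that abrupt disparity changes themselves incur measurable fusion costs.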