This article reports the impact of the degree of personalization and individualization of users' avatars, as well as the impact of the degree of immersion, on typical psychophysical factors in embodied virtual environments. We investigated whether and how virtual body ownership (including agency), presence, and emotional response depend on the specific look of users' avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-processing workflow. Users encountered their avatars in a virtual mirror metaphor using two Virtual Reality (VR) setups with differing degrees of immersion: (a) a large-screen surround projection (the L-shaped part of a CAVE) and (b) a head-mounted display (HMD). We found several significant and a number of notable effects. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even though the latter were generated by the same photogrammetry process and hence can be considered equal in terms of realism and graphical quality. Second, a higher degree of immersion significantly increases body ownership, agency, and the feeling of presence. These results substantiate the value of personalized avatars resembling users' real-world appearance, as well as the value of the deployed scanning process for generating avatars, in VR setups where the effect strength might be substantial, e.g., in social VR or in medical VR-based therapies relying on embodied interfaces. Additionally, our results strengthen the case for fully immersive setups which, today, are accessible to a variety of applications thanks to widely available consumer HMDs.
This paper investigates the effect of avatar realism on embodiment and social interactions in Virtual Reality (VR). We compared abstract avatar representations based on a wooden mannequin with high-fidelity avatars generated from photogrammetric 3D-scanning methods. Both avatar representations were alternately applied to participating users and to their virtual counterparts in dyadic social encounters to examine the impact of avatar realism on self-embodiment and social interaction quality. Users were immersed in a virtual room via a head-mounted display (HMD). Their full-body movements were tracked and mapped to the respective movements of their avatars.
Figure 1: We generate realistic virtual humans from real persons through multi-view stereo scanning. The resulting characters are ready to be animated through a skeletal rig and facial blendshapes, and are compatible with standard graphics and VR engines. The whole reconstruction process requires only minimal user input and takes less than ten minutes.
Abstract: In this paper we present a complete pipeline to create ready-to-animate virtual humans by fitting a template character to a point set obtained by scanning a real person using multi-view stereo reconstruction. Our virtual humans are built upon a holistic character model and feature a detailed skeleton, fingers, eyes, teeth, and a rich set of facial blendshapes. Furthermore, due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing for computation time and minimizing manual intervention, our reconstruction pipeline is capable of processing whole characters in less than ten minutes.
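The two animation mechanisms the abstract names, skeleton-based skinning and facial blendshapes, are standard techniques; the following minimal Python/NumPy sketch (my illustration, not code from the paper) shows how they are conventionally combined per vertex: blendshape deltas offset the rest pose, and linear blend skinning then mixes the bone transforms.

    import numpy as np

    def animate_vertex(v_rest, bone_transforms, skin_weights,
                       blendshape_deltas, blendshape_weights):
        """v_rest: (3,) rest-pose position; bone_transforms: (B, 4, 4);
        skin_weights: (B,); blendshape_deltas: (S, 3); blendshape_weights: (S,)."""
        # Additive facial blendshapes: offset the rest pose by weighted deltas.
        v = v_rest + blendshape_weights @ blendshape_deltas
        # Linear blend skinning: blend the bone transforms acting on v.
        v_h = np.append(v, 1.0)  # homogeneous coordinates
        blended = sum(w * (T @ v_h) for w, T in zip(skin_weights, bone_transforms))
        return blended[:3]

In practice, VR engines evaluate this per vertex on the GPU; representing the character as a single-layer triangle mesh, as the paper does, is what makes such standard skinning applicable out of the box.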
Figure 1: Left: The basic virtual mirror scenario consists of an empty room and a simplistic mirror avatar. Right: The extended scenario employed in the experiment, where the target movement is shown by a semi-transparent blue "ghost character".
Abstract: Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, a high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delays to tens of milliseconds, they will never vanish completely. Currently, there is a gap in the literature regarding the impact of feedback delays on perception and motor performance, as well as on their interplay, in virtual environments employing full-body avatars. With the present study, we address this gap by systematically investigating different levels of delay across a variety of perceptual and motor tasks during full-body action inside a Cave Automatic Virtual Environment (CAVE). We presented participants with their virtual mirror image, which responded to their actions with feedback delays ranging from 45 to 350 ms. We measured the impact of these delays on motor performance, sense of agency, sense of body ownership, and simultaneity perception by means of psychophysical procedures. Furthermore, we examined interaction effects between these aspects to identify possible dependencies. The results show that motor performance and simultaneity perception are affected by latencies above 75 ms. Although the senses of agency and body ownership only decline at latencies higher than 125 ms and deteriorate for latencies greater than 300 ms, they do not break down completely even at the highest tested delay. Interestingly, participants perceptually infer the presence of delays more from their motor error in the task than from the actual level of delay. Whether or not participants notice a delay in a virtual environment might therefore depend on the motor task and their performance rather than on the actual delay.
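Experiments like this require injecting a precisely controlled delay between motion capture and rendering. One common way to do that, sketched below in Python under assumed names (the paper does not describe its implementation), is to buffer timestamped poses and release each one only once the target delay has elapsed.

    from collections import deque
    import time

    class DelayedFeedback:
        def __init__(self, delay_s):
            self.delay_s = delay_s   # e.g. 0.045 to 0.350, as in the study
            self.buffer = deque()    # (timestamp, pose) pairs, oldest first

        def push(self, pose):
            # Called once per motion-capture frame.
            self.buffer.append((time.monotonic(), pose))

        def pop_current(self):
            """Return the newest pose that is at least delay_s old."""
            now = time.monotonic()
            pose = None
            while self.buffer and now - self.buffer[0][0] >= self.delay_s:
                _, pose = self.buffer.popleft()
            return pose  # None until the first sufficiently old pose arrives

The renderer would call pop_current() every frame and keep showing the last returned pose; the system's intrinsic end-to-end latency adds on top of the injected delay, which is why the study's lowest condition starts at 45 ms rather than zero.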
Figure 1: Real-time feedback using a virtual mirror requires an immersive Virtual Reality environment that provides full-body motion capturing, motion analysis, and realistic character rendering at a low end-to-end latency.
Abstract: Virtual Reality (VR) has the potential to support motor learning in ways that exceed the possibilities provided by real-world environments. New feedback mechanisms can be implemented that support motor learning both during the trainee's performance and afterwards as a performance review. As a consequence, VR environments also excel at controlled evaluations, as has been demonstrated in many other application scenarios. However, in the context of motor learning of complex tasks involving full-body movements, questions regarding the main technical parameters of such a system, in particular the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements for VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess and evaluate state-of-the-art techniques and technologies for motion capturing and rendering in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system and present an evaluation of an exemplary system developed to meet these requirements.
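A first-order way to reason about such requirements is to budget the end-to-end latency as a sum of per-stage delays and compare it against a target maximum. The sketch below is a back-of-the-envelope illustration with assumed stage names and placeholder values, not measurements from the paper; the 75 ms budget echoes the threshold reported in the latency study above.

    # Hypothetical per-stage delays in milliseconds (placeholder values).
    STAGES_MS = {
        "motion_capture": 8.0,    # camera exposure + pose estimation
        "network_transfer": 2.0,  # tracker host to render host
        "motion_analysis": 5.0,   # feedback computation
        "rendering": 11.0,        # one frame at 90 Hz
        "display_scanout": 11.0,  # panel refresh + pixel switching
    }

    def end_to_end_ms(stages=STAGES_MS):
        # First-order model: total latency is the sum of stage delays.
        return sum(stages.values())

    total = end_to_end_ms()
    budget = 75.0  # example target, motivated by the study above
    print(f"estimated end-to-end latency: {total:.1f} ms "
          f"({'within' if total <= budget else 'over'} a {budget:.0f} ms budget)")

Such an additive model ignores pipelining and jitter, so measured end-to-end latencies, as evaluated in the paper, are the ground truth against which any budget of this kind must be validated.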