This article reports the impact of the degree of personalization and individualization of users' avatars, as well as the impact of the degree of immersion, on typical psychophysical factors in embodied Virtual Environments. We investigated whether and how virtual body ownership (including agency), presence, and emotional response are influenced by the specific look of users' avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-processing workflow. Users encountered their avatars in a virtual mirror metaphor using two VR setups that provided a varying degree of immersion: (a) a large-screen surround projection (the L-shape part of a CAVE) and (b) a head-mounted display (HMD). We found several significant as well as a number of notable effects. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even though the latter were generated by the same photogrammetry process and hence can be considered equal in terms of realism and graphical quality. Second, the degree of immersion significantly increases body ownership, agency, and the feeling of presence. These results substantiate the value of personalized avatars resembling users' real-world appearance, as well as the value of the deployed scanning process for generating avatars in VR setups where the effect strength might be substantial, e.g., in social Virtual Reality (VR) or in medical VR-based therapies relying on embodied interfaces. Additionally, our results strengthen the case for fully immersive setups, which today are accessible to a variety of applications thanks to widely available consumer HMDs.
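To illustrate the virtual mirror metaphor used in this study, the following minimal sketch reflects tracked joint positions across a vertical mirror plane to drive the avatar seen in the mirror; the function name mirror_pose and the single mirror plane at x = 0 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mirror_pose(joint_positions, mirror_x=0.0):
    """Reflect tracked world-space joint positions across the plane x = mirror_x.

    joint_positions: (N, 3) array of the user's tracked joints.
    Returns positions that drive the reflected avatar shown in the virtual mirror.
    """
    mirrored = joint_positions.copy()
    mirrored[:, 0] = 2.0 * mirror_x - mirrored[:, 0]  # flip across the mirror plane
    return mirrored

# Example: a head tracked at x = 0.4 m appears at x = -0.4 m behind a mirror at x = 0.
head = np.array([[0.4, 1.7, 0.2]])
print(mirror_pose(head))  # [[-0.4  1.7  0.2]]
```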
This paper investigates the effect of avatar realism on embodiment and social interactions in Virtual Reality (VR). We compared abstract avatar representations based on a wooden mannequin with high-fidelity avatars generated from photogrammetry 3D scan methods. Both avatar representations were alternately applied to participating users and to the virtual counterpart in dyadic social encounters to examine the impact of avatar realism on self-embodiment and social interaction quality. Users were immersed in a virtual room via a head-mounted display (HMD). Their full-body movements were tracked and mapped to the respective movements of their avatars.
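As a hedged illustration of how tracked full-body movements might be mapped onto an avatar, the sketch below copies tracked joint rotations onto rig joints by name; the joint names, the Pose type, and the mapping table are assumptions for illustration, not the pipeline used in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    # joint name -> rotation as a quaternion (w, x, y, z)
    rotations: dict = field(default_factory=dict)

# Illustrative mapping from tracker joints to avatar rig joints (assumed names).
TRACKER_TO_AVATAR = {
    "hip": "Hips", "spine": "Spine", "head": "Head",
    "l_hand": "LeftHand", "r_hand": "RightHand",
    "l_foot": "LeftFoot", "r_foot": "RightFoot",
}

def retarget(tracked: Pose) -> Pose:
    """Copy tracked joint rotations onto the corresponding avatar joints."""
    avatar = Pose()
    for src, dst in TRACKER_TO_AVATAR.items():
        if src in tracked.rotations:
            avatar.rotations[dst] = tracked.rotations[src]
    return avatar
```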
Fig. 1. Factors addressed by the VEQ: virtual body ownership (a), which refers to perceiving a virtual body as one's own, as the source of all sensations; agency (b), which refers to the perception of control over a virtual body and, through that, control over events in the environment; and change (c), which refers to the change in the perceived body schema due to the stimulation. The user is depicted in grey; the virtual avatar is depicted in orange.

User embodiment is important for many virtual reality (VR) applications, for example in the context of social interaction, therapy, training, or entertainment. However, there is no data-driven, validated instrument to empirically measure the perceptual aspects of embodiment, which is necessary to reliably evaluate this important phenomenon. To provide a method for assessing components of virtual embodiment in a reliable and consistent fashion, we constructed a Virtual Embodiment Questionnaire (VEQ). We reviewed previous literature to identify applicable constructs and questionnaire items, and performed a confirmatory factor analysis (CFA) on the data from three experiments (N = 196). The analysis confirmed three factors: (1) ownership of a virtual body, (2) agency over a virtual body, and (3) the perceived change in the body schema. A fourth study (N = 22) was conducted to confirm the reliability and validity of the scale by investigating the impact of latency and latency jitter present in the simulation. We present the proposed scale and study results and discuss the resulting implications.
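For readers who want to see how the three confirmed factors might be turned into scores, the following sketch averages Likert responses per factor; the item-to-factor assignment (four items per factor) is an assumption for illustration only, and the published VEQ should be consulted for the actual items and scale.

```python
import numpy as np

# Illustrative item-to-factor assignment (assumed; see the published VEQ for the real items).
FACTORS = {
    "ownership": [0, 1, 2, 3],
    "agency":    [4, 5, 6, 7],
    "change":    [8, 9, 10, 11],
}

def score_veq(responses):
    """responses: (participants, items) array of 1-7 Likert ratings.
    Returns per-participant mean scores for each factor."""
    responses = np.asarray(responses, dtype=float)
    return {name: responses[:, items].mean(axis=1)
            for name, items in FACTORS.items()}
```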
In this paper we present a novel experiment exploring the impact of avatar realism on the illusion of virtual body ownership (IVBO) in immersive virtual environments with full-body avatar embodiment and freedom of movement. We evaluated four distinct avatars (a humanoid robot, a block-man, and male and female human adults) presenting an increasing level of anthropomorphism in their detailed composition. Our results revealed that each avatar elicited a relatively high level of illusion. However, both the machine-like and cartoon-like avatars elicited an equivalent IVBO, slightly superior to the human ones. A realistic human appearance is therefore not a critical top-down factor of IVBO and could lead to an Uncanny Valley effect.
Figure 1: We generate realistic virtual humans from real persons through multi-view stereo scanning. The resulting characters are ready to be animated through a skeletal rig and facial blendshapes, and are compatible with standard graphics and VR engines. The whole reconstruction process requires only minimal user input and takes less than ten minutes.

In this paper we present a complete pipeline to create ready-to-animate virtual humans by fitting a template character to a point set obtained by scanning a real person using multi-view stereo reconstruction. Our virtual humans are built upon a holistic character model and feature a detailed skeleton, fingers, eyes, teeth, and a rich set of facial blendshapes. Furthermore, due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing for computation time and minimizing manual intervention, our reconstruction pipeline is capable of processing whole characters in less than ten minutes.
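Since the characters are animated through standard skeleton-based skinning and facial blendshapes, a minimal numpy sketch of that deformation model is given below; the variable names and the rest-relative joint transforms are assumptions, and the paper's template model and rig are not reproduced here.

```python
import numpy as np

def deform(rest_verts, blendshapes, weights_bs, bone_mats, skin_weights):
    """Apply facial blendshapes followed by linear blend skinning.

    rest_verts:   (V, 3) rest-pose vertex positions
    blendshapes:  (B, V, 3) per-blendshape vertex offsets from the rest pose
    weights_bs:   (B,) blendshape activation weights in [0, 1]
    bone_mats:    (J, 4, 4) rest-relative joint transforms of the skeleton
    skin_weights: (V, J) per-vertex skinning weights, each row summing to 1
    """
    # 1) Blendshapes: add weighted offsets to the rest pose (e.g., facial expressions).
    v = rest_verts + np.einsum("b,bvk->vk", weights_bs, blendshapes)

    # 2) Linear blend skinning: blend the joint transforms per vertex.
    v_h = np.concatenate([v, np.ones((v.shape[0], 1))], axis=1)    # (V, 4) homogeneous
    per_bone = np.einsum("jab,vb->jva", bone_mats, v_h)            # (J, V, 4)
    return np.einsum("vj,jva->va", skin_weights, per_bone)[:, :3]  # (V, 3) deformed vertices
```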