Virtual Reality (VR) has increasingly been referred to as the “ultimate empathy machine” because it allows users to experience any situation from any point of view. However, empirical evidence that VR elicits empathy more effectively than traditional perspective-taking is limited. Two experiments were conducted to compare the short- and long-term effects of a traditional perspective-taking task and a VR perspective-taking task (Study 1) and to explore the role of technological immersion across different types of mediated perspective-taking tasks (Study 2). Results of Study 1 show that, over the course of eight weeks, participants in both conditions reported feeling empathetic and connected to the homeless at similar rates; however, participants who became homeless in VR had more positive, longer-lasting attitudes toward the homeless and signed a petition supporting the homeless at a significantly higher rate than participants who performed a traditional perspective-taking task. Study 2 compared three types of perspective-taking tasks with different levels of immersion (traditional vs. desktop computer vs. VR) and a control condition in which participants received fact-driven information about the homeless. Participants who performed any type of perspective-taking task reported feeling more empathetic and connected to the homeless than participants who only received information. Replicating the results of Study 1, there was no difference in self-report measures across the perspective-taking conditions; however, significantly more participants in the VR condition signed a petition supporting affordable housing for the homeless than in the traditional and less immersive conditions. We discuss the theoretical and practical implications of these findings.
There have been decades of research on the usability and educational value of augmented reality. However, less is known about how augmented reality affects social interactions. The current paper presents three studies that test the social psychological effects of augmented reality. Study 1 examined participants’ task performance in the presence of embodied agents and replicated the typical pattern of social facilitation and inhibition: participants performed a simple task better, but a hard task worse, in the presence of an agent than when they completed the tasks alone. Study 2 examined nonverbal behavior. Participants met an agent sitting in one of two chairs and were asked to choose a chair to sit in. Participants wearing the headset never sat directly on the agent, and, while approaching, most chose a rotation direction that avoided turning their heads away from the agent. A separate group of participants chose a seat after removing the augmented reality headset, and the majority still avoided the seat previously occupied by the agent. Study 3 examined the social costs of using an augmented reality headset with others who are not using one. Participants talked in dyads, and augmented reality users reported less social connection to their partners than those not using augmented reality. Overall, these studies provide evidence that task performance, nonverbal behavior, and social connectedness are significantly affected by the presence or absence of virtual content.
Virtual reality (VR) is a technology that is gaining traction in the consumer market, and with it comes an unprecedented ability to track body motions. These motions are diagnostic of personal identity, medical conditions, and mental states. Previous work has focused on the identifiability of body motions in idealized situations in which the action performed is chosen by the study designer. In contrast, our work tests the identifiability of users under typical VR viewing circumstances, with no specially designed identifying task. Out of a pool of 511 participants, the system identified 95% of users correctly when trained on less than 5 minutes of tracking data per person. We argue that these results show that nonverbal data should be understood by the public and by researchers as personally identifying data.
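The abstract does not describe the identification pipeline itself. As a minimal sketch of the general approach, one might compute per-window summary statistics over head- and hand-tracking channels and feed them to an off-the-shelf classifier; the synthetic data, feature set, and model below are illustrative assumptions, not the paper's method.

```python
# Minimal, illustrative sketch of motion-based user identification.
# The synthetic data, summary-statistic features, and random-forest model
# are assumptions for demonstration; the paper's actual pipeline may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_users, windows_per_user = 25, 40  # the study had 511 participants
samples, channels = 90, 18          # ~1 s at 90 Hz; head + two hands, 6-DOF each

# Each simulated user gets a stable per-channel "signature" (e.g., habitual
# head height or posture) plus moment-to-moment noise.
signatures = rng.normal(0.0, 1.0, (n_users, channels))
raw = signatures[:, None, None, :] + rng.normal(
    0.0, 0.5, (n_users, windows_per_user, samples, channels))

# Features per tracking window: mean, std, min, and max of every channel.
features = np.concatenate(
    [raw.mean(axis=2), raw.std(axis=2), raw.min(axis=2), raw.max(axis=2)],
    axis=-1).reshape(n_users * windows_per_user, -1)
labels = np.repeat(np.arange(n_users), windows_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.25, stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```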
Collaborative virtual environments (CVEs), wherein people can interact with each other virtually via avatars, are becoming increasingly prominent. However, CVEs differ in the type of avatar representation and the level of behavioral realism afforded to users. The present investigation compared the effect of behavioral realism on users' nonverbal behavior, self-presence, social presence, and interpersonal attraction during a dyadic interaction. Fifty-one dyads (participants aged 18 to 26) embodied either a full-bodied avatar with mapped hands and inferred arm movements, an avatar consisting of only a floating head and mapped hands, or a static full-bodied avatar. Planned contrasts compared the effect of behavioral realism against no behavioral realism, and the effect of low versus high behavioral realism. Results show that participants who embodied the avatar with only a floating head and hands experienced greater social presence, self-presence, and interpersonal attraction than participants who embodied a full-bodied avatar with mapped hands. In contrast, there were no significant differences in these measures between participants in the two mapped-hands conditions and those who embodied a static avatar. Participants in the static-avatar condition rotated their physical heads and hands significantly less than participants in the other two conditions during the dyadic interaction. Additionally, side-to-side head movements were negatively correlated with interpersonal attraction regardless of condition. We discuss implications of the finding that behavioral realism influences nonverbal behavior and communication outcomes.
This study focuses on the individual and joint contributions of two nonverbal channels (i.e., face and upper body) in avatar-mediated virtual environments. A total of 140 dyads were randomly assigned to communicate with each other via platforms that differentially activated or deactivated facial and bodily nonverbal cues. The availability of facial expressions had a positive effect on interpersonal outcomes: dyads that could see their partner’s facial movements mapped onto their avatars liked each other more, formed more accurate impressions of their partners, and described their interaction experiences more positively than those unable to see facial movements. The effect on interaction descriptions, however, emerged only when the partner’s bodily gestures were also available, not when facial movements alone were available. Dyads showed greater nonverbal synchrony when they could see their partner’s bodily and facial movements. This study also employed machine learning to explore whether nonverbal cues could predict interpersonal attraction; the resulting classifiers distinguished high from low interpersonal attraction with 65% accuracy. These findings highlight the relative significance of facial cues compared to bodily cues for interpersonal outcomes in virtual environments and lend insight into the potential of automatically tracked nonverbal cues to predict interpersonal attitudes.
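Neither the synchrony measure nor the classifier features are specified in this abstract. One common operationalization of nonverbal synchrony, assumed here purely for illustration, is the mean windowed correlation of the two partners' frame-to-frame movement magnitudes:

```python
# Illustrative sketch: nonverbal synchrony as the mean windowed Pearson
# correlation between two partners' frame-to-frame movement magnitudes.
# This operationalization is an assumption, not the paper's stated method.
import numpy as np

def movement_magnitude(positions: np.ndarray) -> np.ndarray:
    """Frame-to-frame displacement of a tracked point (e.g., the head)."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1)

def synchrony(pos_a: np.ndarray, pos_b: np.ndarray, window: int = 90) -> float:
    """Mean per-window correlation of the partners' movement magnitudes."""
    a, b = movement_magnitude(pos_a), movement_magnitude(pos_b)
    n = min(len(a), len(b)) // window * window
    corrs = [np.corrcoef(a[i:i + window], b[i:i + window])[0, 1]
             for i in range(0, n, window)]
    return float(np.nanmean(corrs))

# Example: two partners' head positions (~60 s at 30 Hz, synthetic), sharing
# a common rhythm plus independent noise, so some synchrony is expected.
rng = np.random.default_rng(1)
shared = rng.normal(0.0, 1.0, (1800, 3)).cumsum(axis=0)
pos_a = shared + rng.normal(0.0, 0.3, (1800, 3))
pos_b = shared + rng.normal(0.0, 0.3, (1800, 3))
print(f"synchrony score: {synchrony(pos_a, pos_b):.2f}")
```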