Accurate knowledge about the size and shape of the body, derived from somatosensation, is important for locating one’s own body in space. The internal representation of these body metrics (the body model) has been assessed by contrasting the distortions of participants’ body estimates across two types of task (a localization task vs. a template matching task). Here, we examined to what extent this contrast is linked to the human body. We compared participants’ shape estimates of their own hand and of non-corporeal objects (a rake, a post-it pad, a CD box) between a localization task and a template matching task. While most items were perceived accurately in the visual template matching task, they appeared distorted in the localization task. The distortions of all items were characterized by greater underestimation of length than of width. This pattern of distortion was maintained across orientations for the rake only, suggesting that the biases measured on the rake were bound to an item-centric reference frame. This had previously been assumed to hold only for the hand. Although the non-corporeal items and the hand show similar patterns of results, the hand appears significantly more distorted than the other items in the localization task. We therefore conclude that the magnitude of the distortions measured in the localization task is specific to the hand. Our results are in line with the idea that, for the hand, the localization task measures contributions of both an implicit body model that is not engaged in landmark localization on objects and other factors that are common to objects and the hand. Electronic supplementary material: the online version of this article (doi:10.1007/s00221-015-4221-0) contains supplementary material, which is available to authorized users.
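A minimal illustrative sketch, not taken from the paper, of how a length vs. width distortion could be quantified from landmark localization judgments. All coordinates, landmark names, and values below are hypothetical and serve only to show the kind of comparison the abstract describes (greater underestimation of length than of width).

```python
# Illustrative sketch (not the authors' analysis code): estimating length and
# width distortion from hypothetical landmark localization judgments.
import numpy as np

def percent_distortion(judged, actual):
    """Signed percent error: negative values indicate underestimation."""
    return 100.0 * (judged - actual) / actual

# Hypothetical (x, y) landmark judgments in cm for one item (e.g., a rake head):
# proximal/distal landmarks define length, left/right landmarks define width.
judged = {"proximal": np.array([0.0, 0.3]), "distal": np.array([0.0, 9.1]),
          "left": np.array([-6.8, 4.5]),    "right": np.array([6.9, 4.5])}
actual = {"proximal": np.array([0.0, 0.0]), "distal": np.array([0.0, 12.0]),
          "left": np.array([-7.0, 6.0]),    "right": np.array([7.0, 6.0])}

judged_length = np.linalg.norm(judged["distal"] - judged["proximal"])
actual_length = np.linalg.norm(actual["distal"] - actual["proximal"])
judged_width = np.linalg.norm(judged["right"] - judged["left"])
actual_width = np.linalg.norm(actual["right"] - actual["left"])

print(f"length distortion: {percent_distortion(judged_length, actual_length):+.1f}%")
print(f"width  distortion: {percent_distortion(judged_width, actual_width):+.1f}%")
```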
In neurology and psychiatry, the detailed study of illusory own-body perceptions has suggested close links between bodily processing and self-consciousness. One such illusory own-body perception is heautoscopy, in which patients have the sensation of being reduplicated and of existing at two or even more locations. In previous experiments using a video head-mounted display, self-location and self-identification were manipulated by applying conflicting visuo-tactile information. Yet the experienced singularity of the self was not affected, i.e., participants did not experience having multiple bodies or selves. In the two experiments presented in this paper, we investigated self-location and self-identification while participants saw two virtual bodies (video-generated in Study 1 and 3D computer-generated in Study 2) that were stroked either synchronously or asynchronously with their own body. In both experiments, self-identification with the two virtual bodies was stronger during synchronous stroking. Furthermore, in the video-generated setup with synchronous stroking, participants reported a stronger feeling of having multiple bodies than in the control conditions. In Study 1, but not in Study 2, self-location, measured by anterior-posterior drift, was significantly shifted towards the two bodies in the synchronous condition only. Self-identification with two bodies, the sensation of having multiple bodies, and the changes in self-location show that the experienced singularity of the self can be studied experimentally. We discuss our data with respect to ownership of supernumerary hands and to heautoscopy. Finally, we compare the effects of the video and 3D computer-generated head-mounted display technologies and discuss the possible benefits of using either technology to induce changes in illusory self-identification with a virtual body.
Background: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication by mapping one's movements onto one's own 3D avatar in real time, so that the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other. Principal findings: In Experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e., both the describing and guessing avatars were self-animated) than when both avatars were in a static neutral pose. Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In Experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation than to an avatar animated by their partner's real movements. In both experiments, participants used significantly more hand gestures when they played the game in the real world. Conclusions: Taken together, the studies show (a) how virtual reality can be used to systematically study the influence of body gestures; (b) that it is important for nonverbal communication to be bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) that there are differences in the amount of body gestures participants use with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.
Humans can recognize emotions expressed through body motion with high accuracy, even when the stimuli are impoverished. However, most research on body motion has relied on exaggerated displays of emotion. In this paper we present two experiments investigating whether emotional body expressions can be recognized when they are recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but the resulting visual stimuli used only the upper-body motion trajectories, presented as animated stick figures. Observers performed an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper-body motion available, our results show recognition accuracy significantly above chance level and high consistency among observers. In the first experiment, which used a more classic emotion-induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame), and the “neutral” category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when labeling the motion sequences. We found that similarities between motion sequences across emotions in properties such as mean motion speed, number of peaks in the motion trajectory, and mean motion span explain a large percentage of the variation in observers' responses. Overall, our results show that upper-body motion is informative for emotion recognition in narrative scenarios.
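A small sketch, under assumed preprocessing and not drawn from the authors' pipeline, of how the motion properties named in the abstract (mean motion speed, number of peaks in the motion trajectory, mean motion span) could be computed from joint trajectories. The array shape, sampling rate, and simulated data below are assumptions for illustration only.

```python
# Illustrative sketch: summary features of an upper-body motion trajectory.
# `trajectory` is a hypothetical array of shape (n_frames, n_joints, 3)
# holding joint positions in metres, sampled at `fps` frames per second.
import numpy as np

def motion_features(trajectory, fps=30.0):
    # Frame-to-frame displacement of every joint.
    step = np.diff(trajectory, axis=0)              # (n_frames-1, n_joints, 3)
    speed = np.linalg.norm(step, axis=2) * fps      # per-joint speed per frame
    mean_speed = speed.mean()

    # Count local maxima (peaks) in the joint-averaged speed profile.
    profile = speed.mean(axis=1)
    peaks = np.sum((profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:]))

    # Motion span: average spatial extent covered by each joint.
    span = trajectory.max(axis=0) - trajectory.min(axis=0)   # (n_joints, 3)
    mean_span = np.linalg.norm(span, axis=1).mean()

    return {"mean_speed": mean_speed, "n_peaks": int(peaks), "mean_span": mean_span}

# Example with random (hypothetical) data: 90 frames, 10 upper-body joints.
rng = np.random.default_rng(0)
print(motion_features(rng.normal(scale=0.05, size=(90, 10, 3)).cumsum(axis=0)))
```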
The elongation of a figure or object can induce a perceptual bias in estimates of its area or volume. This bias is notable in Piagetian experiments, in which participants tend to judge elongated cylinders to contain more liquid than shorter cylinders of equal volume. We investigated whether similar perceptual biases occur in volume judgments of surrounding indoor spaces and whether those judgments are viewpoint dependent. Participants compared a variety of computer-generated rectangular rooms with a square room in a psychophysical task. We found that the elongation bias observed for figures and objects was also present in comparative volume judgments of indoor spaces. Further, the direction of the bias (larger or smaller) depended on the observer's viewpoint. Similar results were obtained with a monoscopic computer display (Experiment 1) and a stereoscopic head-mounted display with head tracking (Experiment 2). We used generalized linear mixed-effects models to describe participants' volume judgments as a function of room depth and width. A good fit to the data was found when a weight was applied to depth relative to width, suggesting that participants' judgments were biased by egocentric properties of the space. We discuss how biases in comparative volume judgments of rooms might reflect the use of simplified strategies, such as anchoring on one salient dimension of the space.
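A simplified, hypothetical stand-in for the kind of analysis the abstract mentions: the authors fit generalized linear mixed-effects models, whereas the sketch below uses a plain fixed-effects logistic regression (no per-participant random effects) to show how a relative weight on depth vs. width can be read off fitted coefficients. All data are simulated; the specific weights (1.4 vs. 1.0), room sizes, and variable names are assumptions, not values from the paper.

```python
# Illustrative sketch: simulated "rectangular room judged larger than the
# square room" responses, modelled on log depth and log width.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
depth = rng.uniform(2.0, 8.0, n)   # room depth in metres (egocentric axis)
width = rng.uniform(2.0, 8.0, n)   # room width in metres
square_area = 16.0                 # hypothetical 4 m x 4 m comparison room

# Simulate responses with more weight on depth (1.4) than on width (1.0),
# a hypothetical bias toward the egocentric depth dimension.
logit_p = 1.4 * np.log(depth) + 1.0 * np.log(width) - np.log(square_area)
judged_larger = rng.random(n) < 1.0 / (1.0 + np.exp(-3.0 * logit_p))

df = pd.DataFrame({"judged_larger": judged_larger.astype(int),
                   "log_depth": np.log(depth), "log_width": np.log(width)})
fit = smf.logit("judged_larger ~ log_depth + log_width", data=df).fit(disp=False)
print(fit.params)
print("relative weight on depth:", fit.params["log_depth"] / fit.params["log_width"])
```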