Performing online complementary motor adjustments is essential to joint actions since it allows interacting people to coordinate efficiently and achieve a common goal. We sought to determine whether, during dyadic interactions, signaling strategies and simulative processes are differentially implemented on the basis of the interactional role played by each partner. To this aim, we recorded the kinematics of the right hand of pairs of individuals who were asked to grasp as synchronously as possible a bottle-shaped object according to an imitative or complementary action schedule. Task requirements implied an asymmetric role assignment so that participants performed the task acting either as (1) Leader (i.e., receiving auditory information regarding the goal of the task with indications about where to grasp the object) or (2) Follower (i.e., receiving instructions to coordinate their movements with their partner's by performing imitative or complementary actions). Results showed that, when acting as Leader, participants used signaling strategies to enhance the predictability of their movements. In particular, they selectively emphasized kinematic parameters and reduced movement variability to provide the partner with implicit cues regarding the action to be jointly performed. Thus, Leaders make their movements more "communicative" even when not explicitly instructed to do so. Moreover, only when acting in the role of Follower did participants tend to imitate the Leader, even in complementary actions where imitation is detrimental to joint performance. Our results show that mimicking and signaling are implemented in joint actions according to the interactional role of the agent, which in turn is reflected in the kinematics of each partner.
Brain monitoring of errors in one's own and others' actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first-person (1PP) or third-person (3PP) perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in 1PP, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from 1PP enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and sense of embodiment. Error positivity was similar in 1PP and 3PP, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they were one's own.
The mental representation of one's body typically implies the continuity of its parts. Here, we used immersive virtual reality to explore whether mere observation of visual discontinuity between the hand and limb of an avatar could influence a person's sense of ownership of the virtual body (feeling of ownership, FO) and of being the agent of its actions (vicarious agency, VA). In experiment 1, we tested whether placing different amounts of visual discontinuity between a virtual hand and limb differentially modulates the perceived FO and VA. Participants passively observed from a first-person perspective four different versions of a virtual limb: (1) a full limb, or a hand detached from the proximal part of the limb by deletion of (2) the wrist; (3) the wrist and forearm; or (4) the wrist, forearm, and elbow. After observing the static or moving virtual limb, participants reported their FO and VA over the hand. We found that even a small visual discontinuity between the virtual hand and arm significantly decreased participants' FO over the hand during observation of the static limb. Moreover, in the same condition, we found that passive observation of the avatar's actions induced a decrease in both FO and VA. We replicated these results in a second study (experiment 2), in which we investigated the modulation of FO and VA by comparing visual body discontinuity with a condition in which the virtual limb was partially occluded. Our data show that mere observation of limb discontinuity can change a person's ownership and agency over a virtual body observed from a first-person perspective, even in the absence of any multisensory stimulation of the real body. These results shed new light on the role of visual body continuity in modulating self-awareness and agency in immersive virtual reality.