As humans, we gather a wide range of information about other people from watching them move. A network of parietal, premotor, and occipitotemporal regions within the human brain, termed the action observation network (AON), has been implicated in understanding others' actions by means of an automatic matching process that links observed and performed actions. Current views of the AON assume a matching process biased towards familiar actions; specifically, those performed by conspecifics and present in the observer's motor repertoire. In this study, we test how this network responds to form and motion cues when observing natural human motion compared to rigid, robot-like motion across two independent functional neuroimaging experiments. In Experiment 1, we report the surprising finding that premotor, parietal, and occipitotemporal regions respond more robustly to rigid, robot-like motion than to natural human motion. In Experiment 2, we replicate and extend this finding by demonstrating that the same pattern of results emerges whether the agent is a human or a robot, which suggests that the preferential response to robot-like motion is independent of the agent's form. These data challenge previous ideas about AON function by demonstrating that the core nodes of this network can be flexibly engaged by novel, unfamiliar actions performed by both human and non-human agents. As such, these findings suggest that the AON is sensitive to a broader range of action features beyond those that are simply familiar.
This research validated and extended the Movement Imagery Questionnaire-Revised (MIQ-R; Hall & Martin, 1997). Study 1 (N = 400) examined the MIQ-R's factor structure via multitrait-multimethod confirmatory factor analysis. The questionnaire was then modified in Study 2 (N = 370) to separately assess the ease of imaging external visual imagery and internal visual imagery, as well as kinesthetic imagery (termed the Movement Imagery Questionnaire-3; MIQ-3). Both Studies 1 and 2 found that a correlated-traits correlated-uniqueness model provided the best fit to the data, while displaying gender invariance and no significant differences in latent mean scores across gender. Study 3 (N = 97) demonstrated the MIQ-3's predictive validity by revealing relationships between imagery ability and use of observational learning. Findings highlight the method effects that arise from assessing each type of imagery ability using the same four movements and demonstrate that better imagers report greater use of observational learning.
Spontaneous mimicry of other people's actions serves an important social function, enhancing affiliation and social interaction. This mimicry can be subtly modulated by different social contexts. We recently found behavioral evidence that direct eye gaze rapidly and specifically enhances mimicry of intransitive hand movements (Wang et al., 2011). Based on past findings linking medial prefrontal cortex (mPFC) to both eye contact and the control of mimicry, we hypothesized that mPFC might be the neural origin of this behavioral effect. The present study aimed to test this hypothesis. During functional magnetic resonance imaging (fMRI) scanning, 20 human participants performed a simple mimicry or no-mimicry task, as previously described (Wang et al., 2011), with direct gaze present on half of the trials. As predicted, fMRI results showed that performing the task activated mirror systems, while direct gaze and inhibition of the natural tendency to mimic both engaged mPFC. Critically, we found an interaction between mimicry and eye contact in mPFC, superior temporal sulcus (STS), and inferior frontal gyrus. We then used dynamic causal modeling to compare 12 possible models of information processing in this network. Results supported a model in which eye contact controls mimicry by modulating the connection strength from mPFC to STS. This suggests that mPFC is the originator of the gaze-mimicry interaction and that it modulates sensory input to the mirror system. Thus, our results demonstrate how different components of the social brain work together to control mimicry online according to the social context.
A hallmark of human social interaction is the ability to consider other people's mental states, such as what they see, believe, or desire. Prior neuroimaging research has predominantly investigated the neural mechanisms involved in computing one's own or another person's perspective and has largely ignored the question of perspective selection. That is, which brain regions are engaged in the process of selecting between self and other perspectives? To address this question, the current fMRI study used a behavioral paradigm that required participants to select between competing visual perspectives. We provide two main extensions to current knowledge. First, we demonstrate that brain regions within dorsolateral prefrontal and parietal cortices respond in a viewpoint-independent manner during the selection of task-relevant over task-irrelevant perspectives. More specifically, following the computation of two competing visual perspectives, common regions of frontoparietal cortex are engaged to select one's own viewpoint over another's as well as to select another's viewpoint over one's own. Second, in the absence of conflict between the content of competing perspectives, we show reduced engagement of frontoparietal cortex when judging another's visual perspective relative to one's own. This latter finding provides the first brain-based evidence for the hypothesis that, in some situations, another person's perspective is computed automatically and effortlessly, and thus less cognitive control is required to select it over one's own perspective. In doing so, we provide stronger evidence for the claim that we not only automatically compute what other people see but also, in some cases, compute this even before we are explicitly aware of our own perspective.
Research in social neuroscience has primarily focused on carving up cognition into distinct pieces, as a function of mental process, neural network or social behaviour, while the need for unifying models that span multiple social phenomena has been relatively neglected. Here we present a novel framework that treats social cognition as a case of semantic cognition, providing a neurobiologically constrained and generalizable account with clear, testable predictions regarding sociocognitive processing in the context of both health and disease. According to this framework, social cognition relies on two principal systems of representation and control. These systems are neuroanatomically and functionally distinct, but interact to (1) enable the development of foundational, conceptual-level knowledge and (2) regulate access to this information in order to generate flexible and context-appropriate social behaviour. The Social Semantics framework sheds new light on the mechanisms of social information processing by maintaining as much explanatory power as prior models of social cognition, whilst remaining simpler, by virtue of relying on fewer components that are "tuned" towards social interactions.