The fact that action observation, motor imagery and execution are associated with partially overlapping increases in parieto-frontal areas has been interpreted as evidence that these behaviors rely on a common system of motor representations. However, studies that include all three conditions within a single paradigm are rare, and consequently, there is a dearth of knowledge concerning the distinct mechanisms involved in these functions. Here we report key differences in the neural representations subserving observation, imagery, and synchronous imitation of a repetitive bimanual finger-tapping task using fMRI under conditions in which visual stimulation is carefully controlled. Relative to rest, observation, imagery, and synchronous imitation are all associated with widespread increases in cortical activity. Importantly, when effects of visual stimulation are properly controlled, each of these conditions is found to have its own unique neural signature. Relative to observation or imagery, synchronous imitation shows increased bilateral activity along the central sulcus (extending into precentral and postcentral gyri), in the cerebellum, supplementary motor area (SMA), parietal operculum, and several motor-related subcortical areas. No areas show greater increases for imagery than for synchronous imitation; observation, however, is associated with greater increases in caudal SMA activity than synchronous imitation. Compared to observation, imagery increases activation in pre-SMA and left inferior frontal cortex, while no areas show the inverse effect. Region-of-interest (ROI) analyses reveal that areas involved in bimanual open-loop movements (primary sensorimotor cortex, classic SMA, and cerebellum) respond most to synchronous imitation and less vigorously to imagery and observation. The differential activity between conditions suggests an alternative hierarchical model in which these behaviors rely on partially independent mechanisms.
Organisms have evolved sensory mechanisms to extract pertinent information from their environment, enabling them to assess their situation and act accordingly. For social organisms travelling in groups, like the fish in a school or the birds in a flock, sharing information can further improve their situational awareness and reaction times. Data on the benefits and costs of social coordination, however, have largely allowed our understanding of why collective behaviours have evolved to outpace our mechanistic knowledge of how they arise. Recent studies have begun to correct this imbalance through fine-scale analyses of group movement data. One approach that has received renewed attention is the use of information theoretic (IT) tools like mutual information, transfer entropy and causation entropy, which can help identify causal interactions in the type of complex, dynamical patterns often on display when organisms act collectively. Yet there is a communications gap between studies focused on the ecological constraints and solutions of collective action and those demonstrating the promise of IT tools in this arena. We attempt to bridge this divide through a series of ecologically motivated examples designed to illustrate the benefits and challenges of using IT tools to extract deeper insights into the interaction patterns governing group-level dynamics. We summarize some of the approaches taken thus far to circumvent existing challenges in this area, and we conclude with an optimistic, yet cautionary, perspective.
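To make the IT tools named above concrete: a minimal sketch of a plug-in transfer entropy estimator on discretized movement series. This is an illustrative implementation of the standard definition T(Y→X) = Σ p(x_{t+1}, x_t, y_t) log₂[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ], not code from the study itself; the function name, bin count, and lack of bias correction are all assumptions for the example.

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate (in bits) of transfer entropy T(Y -> X) from
    two 1-D time series, after equal-width binning. Finite-sample
    bias is not corrected, so values should be compared, not read
    as absolute information rates."""
    # Discretize each series into `bins` equal-width bins (labels 0..bins-1)
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    x_next, x_now, y_now = xd[1:], xd[:-1], yd[:-1]

    # Joint counts over (x_{t+1}, x_t, y_t)
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(x_next, x_now, y_now):
        joint[a, b, c] += 1
    p_xyz = joint / joint.sum()
    p_x1x = p_xyz.sum(axis=2)        # p(x_{t+1}, x_t)
    p_xy = p_xyz.sum(axis=0)         # p(x_t, y_t)
    p_x = p_xyz.sum(axis=(0, 2))     # p(x_t)

    # T = sum p(x1,x,y) * log2[ p(x1,x,y) p(x) / (p(x1,x) p(x,y)) ]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                if p_xyz[a, b, c] > 0:
                    te += p_xyz[a, b, c] * np.log2(
                        p_xyz[a, b, c] * p_x[b]
                        / (p_x1x[a, b] * p_xy[b, c]))
    return te
```

Applied to, say, heading time series of two fish, an asymmetry T(Y→X) > T(X→Y) is the kind of signature used to infer that individual Y's movements are predictive of (and possibly driving) individual X's.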
Can driver steering behaviors, such as a lane change, be executed without visual feedback? In a recent study with a fixed-base driving simulator, drivers failed to execute the return phase of a lane change when steering without vision, resulting in systematic final heading errors biased in the direction of the lane change. Here we challenge the generality of that finding. Suppose that, when asked to perform a lane (position) change, drivers fail to recognize that a heading change is required to make a lateral position change. However, given an explicit path, the necessary heading changes become apparent. Here we demonstrate that when heading requirements are made explicit, drivers appropriately implement the return phase. More importantly, by using an electric vehicle outfitted with a portable virtual reality system, we also show that valid inertial information (i.e., vestibular and somatosensory cues) enables accurate steering behavior when vision is absent. Thus, the failure to properly execute a lane change in a driving simulator without a moving base does not present a fundamental problem for feed-forward driving behavior.
Self-motion through a three-dimensional array of objects creates a radial flow pattern on the retina. We superimposed a simulated object moving in depth on such a flow pattern to investigate the effect of the flow pattern on judgments of both the time to collision (TTC) with an approaching object and the trajectory of that object. Our procedure allowed us to decouple the direction and speed of simulated self-motion-in-depth (MID) from the direction and speed of simulated object MID. In Experiment 1 we found that objects with the same closing speed were perceived to have a higher closing speed when self-motion and object-motion were in the same direction and a lower closing speed when they were in the opposite direction. This effect saturated rapidly as the ratio between the speeds of self-motion and object-motion was increased. In Experiment 2 we found that the perceived direction of object MID was shifted towards the focus of expansion of the flow pattern. In Experiments 3 and 4 we found that the biases in perceived speed and direction produced by simulated self-motion were significantly reduced when binocular information about MID was added. These findings suggest that the large body of research that has studied motion perception using stationary observers has limited applicability to situations in which both the observer and the object are moving.