Predictive mechanisms are essential for interacting successfully with the environment and for compensating for delays in the transmission of neural signals. However, whether and how we predict multisensory action outcomes remains largely unknown. Here we investigated the existence of multisensory predictive mechanisms in a context where actions have outcomes in different modalities. During fMRI data acquisition, auditory, visual, and auditory-visual stimuli were presented in active and passive conditions. In the active condition, a self-initiated button press elicited the stimuli with variable short delays (0–417 ms) between action and outcome, and participants had to detect the presence of a delay in the auditory or visual outcome (task modality). In the passive condition, stimuli appeared automatically, and participants had to report the number of stimulus modalities (unimodal/bimodal). For action consequences compared with identical but unpredictable control stimuli, we observed suppression of the blood oxygen level dependent (BOLD) response in a broad network including bilateral auditory and visual cortices. This effect was independent of task modality and stimulus modality, and was strongest for trials in which no delay was detected (undetected trials).
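As a rough illustration of the contrast reported above, BOLD suppression can be summarized as the difference between responses to self-generated and identical externally generated stimuli. The sketch below uses made-up beta values and a hypothetical helper name; it is not the study's analysis pipeline.

```python
# Hypothetical sketch of the suppression contrast (values are illustrative,
# not data from the study): predicted action outcomes evoke a weaker BOLD
# response than identical but unpredictable control stimuli.

def suppression_index(beta_active: float, beta_passive: float) -> float:
    """Negative values indicate BOLD suppression for self-generated outcomes."""
    return beta_active - beta_passive

# Example: auditory-cortex response estimates (arbitrary units).
print(f"{suppression_index(beta_active=0.8, beta_passive=1.2):.2f}")  # -0.40
```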
Action-feedback monitoring is essential to ensure meaningful interactions with the external world. This process involves generating efference copy-based sensory predictions and comparing them with the actual action feedback. Previous fMRI studies have reported heterogeneous neural correlates of these comparator processes, including the cerebellum and the angular and middle temporal gyri. However, these studies typically included only self-generated actions, and might therefore have induced not only action-based prediction errors but also general sensory mismatch errors. Here, we aimed to disentangle these processes using a custom-made fMRI-compatible movement device that generated active and passive hand movements with identical sensory feedback. Online visual feedback of the hand was presented with a variable delay, and participants had to judge whether the feedback was delayed. Activity in the right cerebellum correlated more positively with delay in active than in passive trials. Interestingly, we also observed activation in the angular and middle temporal gyri, but across both active and passive conditions. This suggests that the cerebellum is a comparator area specific to voluntary action, whereas the angular and middle temporal gyri seem to detect more general intersensory conflict. Correlations with behavior and with cerebellar activity nevertheless suggest that these temporoparietal areas are involved in the processing and awareness of temporal discrepancies during action-feedback monitoring.
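The comparator logic described here can be sketched in a few lines: an efference copy yields a predicted feedback onset, and a prediction error is flagged when the actual feedback deviates beyond some temporal tolerance. All names and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an efference copy-based comparator (illustrative only).

def comparator(predicted_onset_ms: float, actual_onset_ms: float,
               tolerance_ms: float = 50.0) -> tuple[float, bool]:
    """Return the signed prediction error and whether it exceeds tolerance."""
    error = actual_onset_ms - predicted_onset_ms
    return error, abs(error) > tolerance_ms

# Active trial: the efference copy predicts immediate feedback (0 ms),
# but the visual feedback of the hand arrives 83 ms late.
error, mismatch = comparator(predicted_onset_ms=0.0, actual_onset_ms=83.0)
print(f"prediction error = {error:.0f} ms, mismatch = {mismatch}")
```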
Predicting the sensory consequences of our own actions contributes to efficient sensory processing and might help distinguish the consequences of self- versus externally generated actions. Previous research using unimodal stimuli has provided evidence for the existence of a forward model, which explains how such sensory predictions are generated and used to guide behavior. However, whether and how we predict multisensory action outcomes remains largely unknown. Here, we investigated this question in two behavioral experiments. In Experiment 1, we presented unimodal (visual or auditory) and bimodal (visual and auditory) sensory feedback with various delays after a self-initiated button press. Participants had to report whether they detected a delay between their button press and the stimulus in the predefined task modality. In Experiment 2, the sensory feedback and task were the same as in Experiment 1, but in half of the trials the action was externally generated. We observed enhanced delay detection for bimodal relative to unimodal trials, with better performance overall for actively generated actions. Furthermore, in the active condition, the bimodal advantage was largest when the stimulus in the task-irrelevant modality was not delayed, that is, when it was time-contiguous with the action, as compared to when both the task-relevant and task-irrelevant modalities were delayed. This specific enhancement for trials with a nondelayed task-irrelevant modality was absent in the passive condition. These results suggest that a forward model creates predictions for multiple modalities and consequently contributes to multisensory interactions in the context of action.
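One way to picture the reported bimodal advantage is that a non-delayed task-irrelevant stimulus provides an additional action-contiguous temporal reference, sharpening sensitivity to delays in the task-relevant modality. The toy model below (assumed logistic parameters, not fitted to the data) illustrates that idea.

```python
import math

# Toy psychometric model of delay detection (parameters are assumptions,
# not estimates from the experiments).

def p_detect(delay_ms: float, slope: float, pse_ms: float = 150.0) -> float:
    """Logistic probability of reporting a delay of delay_ms."""
    return 1.0 / (1.0 + math.exp(-slope * (delay_ms - pse_ms)))

def p_detect_bimodal(delay_ms: float, irrelevant_delayed: bool) -> float:
    # Assumption: a time-contiguous (non-delayed) task-irrelevant stimulus
    # steepens the psychometric slope, i.e., improves discrimination.
    slope = 0.02 if irrelevant_delayed else 0.03
    return p_detect(delay_ms, slope=slope)

for delay in (83, 167, 250):
    both_delayed = p_detect_bimodal(delay, irrelevant_delayed=True)
    irrelevant_sync = p_detect_bimodal(delay, irrelevant_delayed=False)
    print(f"{delay} ms: both delayed {both_delayed:.2f}, "
          f"irrelevant synchronous {irrelevant_sync:.2f}")
```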
Adaptation to delays between actions and sensory feedback is important for interacting efficiently with our environment. Adaptation may rely on predictions of the action-feedback pairing (motor-sensory component) or on predictions relating the tactile-proprioceptive sensation from the action to the sensory feedback of the action (inter-sensory component). The reliability of temporal information might differ across sensory feedback modalities (e.g., auditory or visual), which in turn influences adaptation. Here, we investigated the role of the motor-sensory and inter-sensory components in sensorimotor temporal recalibration for motor-auditory (button press and tone) and motor-visual (button press and Gabor patch) events. In the adaptation phase of the experiment, action-feedback pairs were presented with systematic temporal delays (0 ms or 150 ms). In the subsequent test phase, auditory or visual feedback of the action was presented with variable delays, and participants were asked whether they had detected a delay. To disentangle the motor-sensory from the inter-sensory component, we varied the type of movement (active button press or passive depression of the button) at adaptation and test. Our results suggest that motor-auditory recalibration is driven mainly by the motor-sensory component, whereas motor-visual recalibration is driven mainly by the inter-sensory component. Recalibration transferred from vision to audition, but not from audition to vision. These results indicate that the motor-sensory and inter-sensory components contribute to recalibration in a modality-dependent manner.
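The recalibration effect can be summarized as a shift of the point of subjective simultaneity (PSS) toward the adapted delay. The sketch below uses assumed numbers (baseline PSS and transfer gain) purely to illustrate the direction of the shift; it is not a model fitted to the study's data.

```python
# Illustrative sketch of sensorimotor temporal recalibration (all numbers
# are assumptions, not the study's estimates).

ADAPT_DELAY_MS = 150.0  # exposure delay used in the adaptation phase
RECAL_GAIN = 0.3        # assumed fraction of the delay carried over

def recalibrated_pss(baseline_pss_ms: float,
                     adapt_delay_ms: float = ADAPT_DELAY_MS,
                     gain: float = RECAL_GAIN) -> float:
    """PSS after adaptation: baseline shifted toward the exposed delay."""
    return baseline_pss_ms + gain * adapt_delay_ms

# After adapting to a 150 ms delay, a baseline PSS of 60 ms shifts to 105 ms,
# so test delays shorter than ~105 ms now tend to feel synchronous.
print(recalibrated_pss(baseline_pss_ms=60.0))
```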