Whenever we observe a movement of a conspecific, our mirror neuron system becomes activated, urging us to imitate the observed movement. However, because such automatic imitation is not always appropriate, an inhibitory component that keeps us from imitating everything we see seems crucial for effective social behavior. This becomes evident from neuropsychological conditions such as echopraxia, in which this suppression is absent. Here, we unraveled the neurodynamics underlying this proposed inhibition of automatic imitation by measuring and manipulating brain activity during the execution of a stimulus-response compatibility paradigm. Within the identified connectivity network, the right middle/inferior frontal cortex sends neural input concerning general response inhibition to the right premotor cortex, which is involved in automatic imitation. Subsequently, the fully prepared imitative response is sent to the left opercular cortex, which functions as a final gating mechanism for intentional imitation. We propose an informed neurocognitive model of the inhibition of automatic imitation, suggesting a functional dissociation between automatic and intentional imitation.
Crossmodal binding usually relies on bottom-up stimulus characteristics such as spatial and temporal correspondence. However, in cases of ambiguity the brain has to decide whether to combine or segregate sensory inputs. We hypothesise that widespread, subtle forms of synesthesia provide crossmodal mapping patterns which underlie and influence multisensory perception. Our aim was to investigate whether such a mechanism plays a role in the case of pitch-size stimulus combinations. Using a combination of psychophysics and ERPs, we could show that, despite violations of spatial correspondence, the brain specifically integrates certain stimulus combinations which are congruent with respect to our hypothesis of pitch-size synesthesia, thereby impairing performance on an auditory spatial localisation task (Ventriloquist effect). Subsequently, we perturbed this process by functionally disrupting a brain area known for its role in multisensory processes, the right intraparietal sulcus, and observed how the Ventriloquist effect was abolished, thereby increasing behavioural performance. Correlating behavioural, TMS and ERP results, we could retrace the origin of the synesthetic pitch-size mappings to a right intraparietal involvement around 250 ms. The results of this combined psychophysics, TMS and ERP study support a shift in the current viewpoint on synesthesia towards viewing it as the extreme end of a spectrum of normal, adaptive perceptual processes entailing close interplay between the different sensory systems. Our results support this spectrum view of synesthesia by demonstrating that its neural basis crucially depends on normal multisensory processes.
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals and varied the stimulus onset asynchronies (SOAs) of the AV pairs, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuroimaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
Practice and training usually lead to increased performance in a given task. In addition, a shift from intentional toward more automatic processing mechanisms is often observed. It is currently debated whether automatic and intentional processing are subserved by the same or by different mechanism(s), and whether the same or different regions in the brain are recruited. Previous correlational evidence provided by behavioral, neuroimaging, modeling, and neuropsychological studies addressing this question yielded conflicting results. Here we used transcranial magnetic stimulation (TMS) to compare the causal influence of disrupting either left or right parietal cortex during automatic and intentional numerical processing, as reflected by the size congruity effect and the numerical distance effect, respectively. We found a functional hemispheric asymmetry within parietal cortex, with only the TMS-induced right parietal disruption impairing both automatic and intentional numerical processing. In contrast, disrupting the left parietal lobe with TMS, or applying sham stimulation, did not affect performance during automatic or intentional numerical processing. The current results provide causal evidence for the functional relevance of right, but not left, parietal cortex for intentional and automatic numerical processing, implying that, at least within the parietal cortices, automatic and intentional numerical processing rely on the same underlying hemispheric lateralization.