Prior studies have repeatedly reported behavioural benefits for events occurring at attended, compared to unattended, points in time. It has been suggested that, as with spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, making it difficult to interpret possible effects (or their absence). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies, and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from occurring at the time point when another modality is expected. Instead, attention appears to be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled so as to react flexibly to environmental demands, and that the efficiency of this selective decoupling unfolds over time.
Temporal orienting leads to well-documented behavioural benefits for sensory events occurring at the anticipated moment. However, the consequences of temporal orienting in cross-modal contexts are still unclear. On the one hand, some studies using audio-tactile paradigms suggest that attentional orienting in time and in modality form a closely coupled system, in which temporal orienting dominates modality orienting, similar to what happens in cross-modal spatial attention. On the other hand, recent findings using a visuo-tactile paradigm suggest that attentional orienting in time can unfold independently in each modality, leading to cross-modal decoupling. In the present study, we investigated whether cross-modal decoupling in time extends to audio-tactile contexts; if so, decoupling might represent a general property of cross-modal attention in time. To this end, we used a speeded discrimination task in which we manipulated the probability of target presentation in time and modality. In each trial, a manipulation of time-based expectancy was used to guide participants' attention to task-relevant events, either tactile or auditory, at different points in time. In two experiments, participants generally showed enhanced behavioural performance at the most likely onset time of each modality, with no evidence of coupling. This pattern supports the hypothesis that cross-modal decoupling could be a general phenomenon in temporal orienting.
A popular way to analyze resting-state electroencephalography (EEG) and magnetoencephalography (MEG) data is to treat them as a functional network, in which sensors are identified with nodes and interactions between channel time series with network connections. Although conceptually appealing, the network-theoretical approach to sensor-level EEG and MEG data is challenged by the fact that EEG and MEG time series are mixtures of source activity. It is, therefore, of interest to assess the relationship between functional networks of source activity and the ensuing sensor-level networks. Since network topological features are of high interest in experimental studies, we address the question of to what extent network topology can be reconstructed from sensor-level functional connectivity (FC) measures in the case of MEG data. Simple simulations that consider only a small number of regions do not allow assessment of network properties; we therefore use a diffusion magnetic resonance imaging-constrained whole-brain computational model of resting-state activity. Our motivation stems from the fact that many contributions in the literature still perform network analysis at the sensor level, and we aim to show the discrepancies between source- and sensor-level network topologies using realistic simulations of resting-state cortical activity. Our main findings are that (i) the effect of field spread on network topology depends on the type of interaction (instantaneous or lagged), with instantaneous mixing of cortical signals leading to an underestimation of lagged FC at the sensor level; (ii) instantaneous interaction is more sensitive to field spread than lagged interaction; and (iii) discrepancies are reduced when using planar rather than axial gradiometers. We therefore recommend using lagged interaction measures on planar gradiometer data when investigating network properties of resting-state sensor-level MEG data.
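The contrast between instantaneous and lagged interaction can be made concrete with a minimal sketch (our own illustration, not the study's simulation pipeline; the mixing weights, sampling parameters, and the choice of the imaginary part of coherency as the lagged measure are assumptions). Two independent sources are linearly mixed into two sensors, mimicking field spread: the instantaneous measure (zero-lag correlation) is strongly inflated by the mixing, whereas the lagged measure stays near zero.

```python
# Minimal sketch of field spread: zero-lag mixing of two *independent*
# sources inflates instantaneous FC but not a lagged measure such as
# the imaginary part of coherency. All parameters are illustrative.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
fs, n = 250, 250 * 60                      # 250 Hz, 60 s of "recording"

s1 = rng.standard_normal(n)                # two independent cortical sources:
s2 = rng.standard_normal(n)                # no true interaction between them

x1 = 1.0 * s1 + 0.4 * s2                   # instantaneous linear mixing into
x2 = 0.4 * s1 + 1.0 * s2                   # two sensors (toy field spread)

# Instantaneous FC: zero-lag correlation, inflated by the mixing.
r = np.corrcoef(x1, x2)[0, 1]

# Lagged FC: imaginary part of coherency, insensitive to zero-lag mixing.
f, p12 = csd(x1, x2, fs=fs, nperseg=512)
_, p11 = csd(x1, x1, fs=fs, nperseg=512)
_, p22 = csd(x2, x2, fs=fs, nperseg=512)
icoh = np.abs(np.imag(p12 / np.sqrt(p11 * p22)))

print(f"zero-lag correlation:   {r:.2f}")              # ~0.69, spurious coupling
print(f"mean |imag(coherency)|: {icoh.mean():.3f}")    # ~0, no lagged coupling
```

In this toy setting the spurious sensor-level coupling comes entirely from the real (zero-lag) part of the cross-spectrum, which is why a lagged measure discards it.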
Information across different senses can affect our behavior in both positive and negative ways. Stimuli aligned with a target stimulus can improve behavioral performance, while competing, transient stimuli often impair task performance. But what about subtle changes in task-irrelevant multisensory stimuli? In this experiment we tested the effect of the alignment of subtle auditory and visual distractor stimuli on performance in detection and discrimination tasks. Participants performed either a detection or a discrimination task on a centrally presented Gabor patch while being simultaneously exposed to two distractors: a random dot kinematogram, which alternated its color between green and red at a frequency of 7.5 Hz, and a continuous tone, which was either a frequency-modulated pure tone (in the audiovisual congruent and incongruent conditions) or white noise (in the visual control condition). Although the modulation frequency of the pure tone initially differed from that of the random dot kinematogram, the two modulation frequencies could align after a variable delay, and we measured accuracy and reaction times around the possible alignment time. We found increased accuracy in the audiovisual congruent condition, suggesting that subtle alignments of multisensory background stimuli can improve performance on the current task.
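The stimulus timing can be illustrated with a short sketch (our reconstruction; the trial duration, the initial auditory modulation rate, and the sampled alignment delay are assumed values, with only the 7.5 Hz visual rate taken from the description above): the auditory modulation rate starts off different from the visual one and jumps to 7.5 Hz after a variable delay, around which accuracy and reaction times are then analysed.

```python
# Toy reconstruction of the distractor timing; values other than the
# 7.5 Hz visual rate are assumptions made for illustration.
import numpy as np

fs = 1000                                  # 1 kHz stimulus clock (assumed)
t = np.arange(0, 8, 1 / fs)                # 8 s trial (assumed duration)

f_visual = 7.5                             # color alternation rate (from abstract)
f_audio_initial = 6.0                      # initial FM rate (assumed value)
align_at = 3.2                             # example of a variable per-trial delay

# Visual modulation: 7.5 Hz alternation between green and red.
visual_mod = np.sign(np.sin(2 * np.pi * f_visual * t))

# Auditory FM rate: differs initially, aligns to 7.5 Hz at `align_at`.
f_audio = np.where(t < align_at, f_audio_initial, f_visual)
audio_phase = 2 * np.pi * np.cumsum(f_audio) / fs   # integrate rate -> phase
audio_mod = np.sin(audio_phase)

print(f"modulation rates align at t = {align_at:.1f} s")
```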
Predicting an upcoming event in time or modality leads to expectancy-based benefits for the predicted time point or modality and to costs for the unpredicted ones. Yet, it is not clear how expectancy about an event's timing interacts with expectancy about its modality. In our study, participants performed a discrimination task (single or double pulse) on visual or tactile targets. An auditory cue (75% validity) was used to increase expectancy of one or the other modality. We also probabilistically manipulated the time point (1 or 3 s after the cue) at which the stimulus appeared. We found that increasing temporal expectation reduced reaction times for valid targets in both vision and touch, but when the target was presented in the unexpected modality (invalid cue), temporal predictability induced a reaction-time cost. The benefit of temporal expectancy is therefore converted into a cost if the stimulus is presented in the unattended modality, suggesting cross-modal competition in the temporal domain.