Eye movements introduce large artifacts into electroencephalographic (EEG) recordings and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded; hence, in standard EEG paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods, including regression and blind source separation, have been proposed, yet no automated standard procedure has been established. By simultaneously recording eye movements and 64-channel EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials, and eyelid artifacts, and study their interrelations during different types of eye and eyelid movements. In concordance with earlier studies, our results confirm that these artifacts arise from different independent sources and that, depending on electrode site, gaze direction, and choice of reference, these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that, due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm that uses eye tracker information to identify eye artifact-related ICA components (ICs) objectively and automatically. On the data presented here, the algorithm performed very similarly to human experts when those were given both the topographies of the ICs and their respective activations in a large number of trials. Moreover, it performed more reliably and was almost twice as effective as human experts when those had to base their decision on IC topographies alone. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positives and false negatives, with an area under the curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data removed or substantially suppressed ocular artifacts, including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion, the present work aims at a better understanding of individual eye movement artifacts, their interrelations, and the respective implications for eye artifact correction. Additionally, the proposed ICA procedure provides a tool for optimized detection and correction of eye movement-related artifact components.
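The abstract does not spell out the detection algorithm, but the core idea of combining eye tracker information with ICA can be illustrated with a minimal Python sketch. Everything below (the function name, the correlation criterion, the 0.6 threshold, the synthetic data) is a hypothetical illustration under assumed inputs, not the authors' published procedure: each IC time course is correlated with the synchronized horizontal and vertical gaze signals, and components exceeding the threshold are flagged as ocular.

import numpy as np

def flag_ocular_ics(ic_activations, gaze_xy, corr_threshold=0.6):
    """Flag ICs whose time courses track gaze position.

    Hypothetical illustration, not the published algorithm.
    ic_activations -- (n_components, n_samples) IC time courses,
                      resampled in sync with the eye tracker
    gaze_xy        -- (2, n_samples) horizontal and vertical gaze
    corr_threshold -- absolute correlation above which an IC is flagged
    """
    flagged = []
    for i, ic in enumerate(ic_activations):
        # Strongest absolute correlation with either gaze channel
        r = max(abs(np.corrcoef(ic, g)[0, 1]) for g in gaze_xy)
        if r > corr_threshold:
            flagged.append(i)
    return flagged

# Synthetic check: IC 0 mimics horizontal gaze and should be flagged.
rng = np.random.default_rng(0)
gaze = rng.standard_normal((2, 5000)).cumsum(axis=1)  # random-walk gaze
ics = rng.standard_normal((10, 5000))
ics[0] = gaze[0] + 0.1 * rng.standard_normal(5000)
print(flag_ocular_ics(ics, gaze))  # -> [0]

A practical variant could additionally compare IC variance during saccades versus fixations, which would also capture saccadic spike potentials that correlate only weakly with gaze position itself.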
Recent evidence suggests that visual attention alternately samples two behaviourally relevant objects at approximately 4 Hz, rhythmically shifting between the objects. Whether similar attentional rhythms exist in other sensory modalities, however, is not yet clear. We therefore adapted and extended an established paradigm to investigate visual and putative auditory attentional rhythms, as well as possible interactions, on both a behavioural (detection performance, N = 33) and a neural level (EEG, N = 18). The results during unimodal attention demonstrate that both visual- and auditory-target detection fluctuate at frequencies of approximately 4-8 Hz, confirming that attentional rhythms are not specific to visual processing. The EEG recordings provided evidence of the oscillatory activity underlying these behavioural effects. At right and left occipital EEG electrodes, we detected counter-phasic theta-band activity (4-8 Hz), mirroring the behavioural evidence of alternating sampling between the objects presented to the right and left of central fixation. Similarly, alpha-band activity, as a signature of relatively suppressed sensory encoding, showed a theta-rhythmic, counter-phasic change in power. Moreover, these theta-rhythmic changes in alpha power were predictive of behavioural performance in both sensory modalities. Overall, the present findings provide a new perspective on the multimodal rhythmicity of attention.
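The behavioural fluctuation analysis described above follows a common recipe in this literature; the Python sketch below shows that generic recipe, not the authors' exact pipeline. It assumes detection accuracies have been binned by cue-target interval into an evenly sampled time course; significance is assessed against a permutation null built by shuffling the time bins.

import numpy as np

def detection_spectrum(acc, fs, n_perm=1000, seed=0):
    """Amplitude spectrum of a behavioural detection time course.

    Generic sketch: remove the linear trend, take the windowed
    amplitude spectrum, and compare against a shuffle-based null.
    acc -- accuracy per time bin; fs -- bin sampling rate in Hz.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(len(acc)) / fs
    acc = acc - np.polyval(np.polyfit(t, acc, 1), t)  # detrend
    win = np.hanning(len(acc))
    freqs = np.fft.rfftfreq(len(acc), d=1 / fs)
    amp = np.abs(np.fft.rfft(acc * win))
    null = np.array([np.abs(np.fft.rfft(rng.permutation(acc) * win))
                     for _ in range(n_perm)])
    p = (null >= amp).mean(axis=0)  # pointwise permutation p-values
    return freqs, amp, p

# Example: 1.2 s of cue-target intervals in 20 ms bins, 4 Hz modulation.
fs = 50.0
t = np.arange(0, 1.2, 1 / fs)
acc = (0.7 + 0.1 * np.sin(2 * np.pi * 4 * t)
       + 0.02 * np.random.default_rng(1).standard_normal(t.size))
freqs, amp, p = detection_spectrum(acc, fs)
print(freqs[np.argmax(amp)])  # spectral peak near 4 Hz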
The ability to integrate visual features into a global coherent percept that can be further categorized and manipulated is a fundamental ability of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how local and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we investigate perceptual grouping through the analysis of a contour categorization task, where the local elements that form a contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. The contour formation in our visual stimulus is a dynamic process in which, for the first time, visual perception of contours is disentangled from the onset of visual stimulation and from motor preparation, cognitive processes that until now have been behaviorally coupled to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seem to be ongoing phenomena, categorization of a contour could only be predicted from local oscillatory activity within parietal/frontal sources, which, in turn, synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning neural sources that are not category specific. The two long-range networks, i.e., the category-specific gamma network and the non-category-specific beta network, are functionally distinct but spatially overlapping. Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites.
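Long-range synchronization of the kind reported here is commonly quantified with the phase-locking value (PLV). The following Python sketch shows a standard Hilbert-transform PLV estimate between two source time courses; the band edges, filter order, and synthetic signals are illustrative assumptions, not the authors' settings.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value between two source time courses.

    Standard estimate: band-pass filter, extract instantaneous
    phase via the Hilbert transform, and take the magnitude of
    the mean phase-difference vector (0 = no locking, 1 = perfect).
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

# Example: two noisy signals sharing a 40 Hz (gamma) component.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 40 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = common + 0.5 * rng.standard_normal(t.size)
print(plv(x, y, fs, (30, 48)))  # high value, near 1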
Cross-modal interactions between sensory channels have been shown to depend on both the spatial disparity and the perceptual similarity of the presented stimuli. Here we investigate the behavioral and neural integration of auditory and tactile stimulus pairs at different levels of spatial disparity. Additionally, we modulated the amplitudes of both stimuli in either a coherent or non-coherent manner. We found that both auditory and tactile localization performance was biased towards the stimulus in the respective other modality. This bias increases linearly with stimulus disparity and is more pronounced for coherently modulated stimulus pairs. Analyses of electroencephalographic (EEG) activity at temporal cortical sources revealed enhanced event-related potentials (ERPs) as well as decreased alpha and beta power during bimodal as compared to unimodal stimulation. However, while the observed ERP differences are similar for all stimulus combinations, the extent of oscillatory desynchronization varies with stimulus disparity. Moreover, when both stimuli were subjectively perceived as originating from the same direction, the reduction in alpha and beta power was significantly stronger. These observations suggest that in the EEG the level of perceptual integration is mainly reflected by changes in ongoing oscillatory activity.
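The alpha/beta power decrease described above is typically expressed as event-related desynchronization (ERD), i.e., percent power change relative to a pre-stimulus baseline. Below is a generic Python sketch of such an ERD measure; the band, baseline window, and analysis window are illustrative assumptions, not the authors' parameters.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_change(epochs, fs, band, baseline, window):
    """Relative band-power change (ERD/ERS) per epoch, in percent.

    Generic sketch: band-pass filter each epoch, square the Hilbert
    envelope to obtain instantaneous power, and express power in the
    analysis window as percent change from the baseline window.
    epochs -- (n_epochs, n_samples); windows given in seconds.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1))) ** 2
    t = np.arange(epochs.shape[-1]) / fs
    base = power[..., (t >= baseline[0]) & (t < baseline[1])].mean(axis=-1)
    post = power[..., (t >= window[0]) & (t < window[1])].mean(axis=-1)
    return 100 * (post - base) / base  # negative values = desynchronization

# Example: 20 one-second epochs at 250 Hz, alpha band 8-12 Hz.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((20, 250))
print(band_power_change(epochs, 250.0, (8, 12), (0.0, 0.4), (0.5, 1.0)).mean())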
When one is listening, familiarity with an attended talker's voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners' working memory for the serial order of spoken digits when a task-irrelevant, distracting sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group were undergraduate psychology students (N = 66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group of listeners were family members and friends (N = 20) who had known either one of the two talkers for more than 10 years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar versus unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: Students experienced stronger working memory disruption by a familiar talker, irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. While previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker disrupts working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates the memory disruption.