Vision combines local feature integration with active viewing processes, such as eye movements, to perceive complex visual scenes. However, it is still unclear how these processes interact and support each other. Here, we investigated how the dynamics of saccadic eye movements interact with contour integration, focusing on situations in which contours are difficult to find or even absent. We recorded observers' eye movements while they searched for a contour embedded in a background of randomly oriented elements. Task difficulty was manipulated by varying the contour's path angle. An association field model of contour integration was employed to predict potential saccade targets by identifying stimulus locations with high contour salience. We found that the number and duration of fixations increased with the contour's path angle. In addition, fixation duration increased over the course of a trial, and the time course of saccade amplitude depended on observers' percepts. Model fitting revealed that saccades fully compensate for the reduced salience of peripheral contour targets. Importantly, our model predicted fixation locations to a considerable degree, indicating that observers fixated collinear elements. These results show that contour integration actively guides eye movements and determines their spatial and temporal parameters.
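As an illustration of the general approach, the sketch below computes an association-field-style salience value for each Gabor element from pairwise proximity and collinearity, so that elements lying on a smooth contour score highest and become candidate saccade targets. The affinity formula, parameter values, and function names are illustrative assumptions, not the model fitted in the study.

```python
import numpy as np

def association_strength(p1, theta1, p2, theta2, sigma_d=2.0, sigma_a=np.pi / 6):
    """Pairwise affinity between two oriented elements (hypothetical form).

    High when the elements are close together and both roughly aligned
    with the line connecting them (a simple collinearity constraint).
    """
    d = np.asarray(p2) - np.asarray(p1)
    dist = np.linalg.norm(d)
    phi = np.arctan2(d[1], d[0])                              # direction of the connecting line
    a1 = np.abs(np.angle(np.exp(2j * (theta1 - phi)))) / 2    # orientation offset, modulo pi
    a2 = np.abs(np.angle(np.exp(2j * (theta2 - phi)))) / 2
    return np.exp(-dist**2 / (2 * sigma_d**2)) * np.exp(-(a1**2 + a2**2) / (2 * sigma_a**2))

def element_salience(positions, orientations):
    """Per-element contour salience: summed affinity to all other elements."""
    n = len(positions)
    s = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                s[i] += association_strength(positions[i], orientations[i],
                                             positions[j], orientations[j])
    return s

# Example: a short horizontal collinear chain embedded among random elements
rng = np.random.default_rng(0)
pos = np.vstack([np.column_stack([np.arange(5), np.zeros(5)]),   # contour elements
                 rng.uniform(-5, 5, size=(20, 2))])              # background elements
ori = np.concatenate([np.zeros(5), rng.uniform(0, np.pi, 20)])
salience = element_salience(pos, ori)
print("Most salient elements (candidate saccade targets):", np.argsort(salience)[-5:])
```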
While viewing a scene, the eyes are attracted to salient stimuli. We set out to identify the brain signals controlling this process. In a contour integration task, in which participants searched for a collinear contour in a field of randomly oriented Gabor elements, a previously established model was applied to calculate a visual saliency value for each fixation location. We studied brain activity related to the modeled saliency values, using coregistered eye tracking and EEG. To disentangle EEG signals reflecting salience in free viewing from overlapping EEG responses to sequential eye movements, we applied generalized additive mixed modeling (GAMM) to single epochs of saccade-related EEG. We found that, when saliency at the next fixation location was high, the amplitude of presaccadic EEG activity was low. Since presaccadic activity reflects covert attention to the saccade target, our results indicate that greater attentional effort is needed to select less salient saccade targets than more salient ones. This effect was prominent in contour-present conditions (half of the trials), but ambiguous in the contour-absent condition. Presaccadic EEG activity may thus be indicative of bottom-up factors in saccade guidance. The results underscore the utility of GAMM for EEG-eye movement coregistration research.
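For readers unfamiliar with the approach, the sketch below fits a generalized additive model with a smooth term for saliency to hypothetical single-epoch presaccadic amplitudes, using pygam in Python. It is only a simplified stand-in: the actual analysis was a full GAMM with additional terms (e.g., participant-level random effects), and all variable names and values here are simulated assumptions.

```python
import numpy as np
from pygam import LinearGAM, s, f  # pip install pygam

# Hypothetical single-epoch data: one row per saccade.
# Column 0: modeled saliency at the upcoming fixation location.
# Column 1: saccade amplitude (deg), a typical nuisance covariate.
# Column 2: participant index, entered here as a factor
#           (a full GAMM would treat participants as random effects).
rng = np.random.default_rng(1)
n = 500
saliency = rng.uniform(0.0, 1.0, n)
amplitude = rng.uniform(1.0, 8.0, n)
subject = rng.integers(0, 10, n)
# Simulated presaccadic amplitude that decreases with saliency (illustration only).
eeg = -2.0 * saliency + 0.3 * amplitude + rng.normal(0.0, 1.0, n)

X = np.column_stack([saliency, amplitude, subject])
gam = LinearGAM(s(0) + s(1) + f(2)).fit(X, eeg)
gam.summary()

# Smooth partial effect of saliency on presaccadic EEG amplitude.
XX = gam.generate_X_grid(term=0)
effect = gam.partial_dependence(term=0, X=XX)
print(effect[:5])
```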
Dehaene, Changeux, Naccache, Sackur, and Sergent (2006) and Koch and Tsuchiya (2007) recently proposed taxonomies that distinguish between four processing states, based on bottom-up stimulus strength and top-down attentional amplification. The aim of the present study was to empirically test these processing states using the priming paradigm. Our results showed that attention (prime attended or not) and stimulus strength (prime presented subliminally or not) significantly modulated priming effects: either receiving top-down attention or possessing sufficient bottom-up strength was a prerequisite for a stimulus to elicit priming. When both top-down attention and sufficient bottom-up strength were present, the priming effect was boosted. The origins of the observed priming effects also varied between processing states. We conclude that our empirical study using the priming paradigm confirmed the presence of the four processing states, which showed distinct patterns and distinct origins of response priming effects.
Two stimuli alternately presented at different locations can evoke a percept of a stimulus continuously moving between the two locations. The neural mechanism underlying this apparent motion (AM) is thought to be increased activation of primary visual cortex (V1) neurons tuned to locations along the AM path, although evidence remains inconclusive. AM masking, which refers to the reduced detectability of stimuli along the AM path, has been taken as evidence for AM-related V1 activation. AM-induced neural responses are thought to interfere with responses to physical stimuli along the path and thereby impair the perception of these stimuli. However, AM masking can also be explained by predictive coding models, which predict that responses to stimuli presented on the AM path are suppressed when they match the spatio-temporal prediction of a stimulus moving along the path. In the present study, we find that AM has a distinct effect on the detection of target gratings, limiting maximum performance at high contrast levels. This masking is strongest when the target orientation is identical to the orientation of the inducers. We developed a V1-like population code model of early visual processing, based on a standard contrast normalization model. We find that AM-related activation in early visual cortex is too small to either cause masking or to be perceived as motion. Our model instead predicts strong suppression of early sensory responses during AM, consistent with the theoretical framework of predictive coding.
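The sketch below illustrates the kind of contrast-normalization population code such a model builds on: each orientation-tuned unit's drive is divided by the pooled drive of the whole population plus a semi-saturation constant. The tuning function and parameter values are illustrative assumptions and omit the spatial and AM-related components of the full model described in the study.

```python
import numpy as np

def tuning(theta, pref, kappa=2.0):
    """Orientation tuning curve (von Mises on the doubled angle)."""
    return np.exp(kappa * (np.cos(2 * (theta - pref)) - 1))

def population_response(contrast, theta, prefs, n=2.0, sigma=0.1):
    """Divisive (contrast) normalization: each unit's drive is divided by
    the summed drive of the population plus a semi-saturation constant."""
    drive = (contrast * tuning(theta, prefs)) ** n
    return drive / (sigma ** n + drive.sum())

prefs = np.linspace(0, np.pi, 16, endpoint=False)   # preferred orientations of the units
# Population response to a vertical grating at low and high contrast
low = population_response(0.05, np.pi / 2, prefs)
high = population_response(0.50, np.pi / 2, prefs)
print("Peak response, low vs. high contrast:", low.max(), high.max())
```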