Living in a constantly changing world, we continuously process sensory input while navigating through complex and structured environments. Through statistical learning, our brains efficiently adapt to regularities in the visual environment, biasing our visual attentional selection. This learning process is evident in everyday tasks like web browsing, where we learn to inhibit regions where distracting advertisements are likely to appear, enabling us to focus on task-relevant stimuli. The primary objective of this thesis is to discern whether learned spatial suppression occurs proactively before attentional engagement or reactively following initial attentional selection.
In Chapter 2, we tracked oculomotor responses and found that learning-dependent suppression affected even the earliest saccades following the onset of the search display. This suggests that suppression was already in place when the display appeared, pointing to a proactive process.
In Chapters 3 to 5, we employed a search-probe paradigm with a probe-offset detection task to investigate whether suppression emerged before the onset of the search display. We found that when the attentional system was primed for the impending visual search, spatial attention was shaped by statistical learning: high-probability distractor locations were suppressed and high-probability target locations were enhanced. In Chapter 5, manipulating the onset of the probe display confirmed that suppression is in place before the search display appears.
Taken together, these findings initially supported the notion of proactive suppression. Chapter 6, however, offered a different perspective. Using computational modeling of EEG data to reconstruct neuronal tuning profiles of spatial attention, we found that although suppression was present before the search display, it was most likely triggered by the preceding placeholder display. This points to a reactive suppression mechanism, in which suppression follows an initial attentional enhancement.
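To make the reconstruction step concrete, the sketch below outlines one common approach to recovering spatial tuning profiles from EEG, the inverted encoding model; whether Chapter 6 used exactly this formulation is an assumption made here for illustration, and the symbols \(B\), \(C\), and \(W\) are illustrative rather than taken from the thesis. In the forward (encoding) step, the signal at each electrode is modeled as a weighted sum of hypothetical spatial channels tuned to different locations, and the weights are estimated from training data:
\[
B_{\text{train}} = W\,C_{\text{train}}, \qquad \hat{W} = B_{\text{train}}\,C_{\text{train}}^{\top}\left(C_{\text{train}}\,C_{\text{train}}^{\top}\right)^{-1},
\]
where \(B_{\text{train}}\) is the electrodes-by-trials EEG data, \(C_{\text{train}}\) the channels-by-trials matrix of predicted channel responses, and \(W\) the electrodes-by-channels weight matrix. The model is then inverted to reconstruct channel responses, i.e., spatial tuning profiles, from independent test data:
\[
\hat{C}_{\text{test}} = \left(\hat{W}^{\top}\hat{W}\right)^{-1}\hat{W}^{\top}B_{\text{test}}.
\]
Under this scheme, systematically reduced reconstructed channel responses at the high-probability distractor location, relative to neutral locations, would index learned spatial suppression.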
This raises the question of why learned spatial suppression is triggered by a neutral, task-irrelevant placeholder display. One plausible explanation is that the brain encodes a plastic spatial priority map shaped by statistical learning. Recent theoretical advances in working memory research propose that memorized representations can be maintained in activity-silent synaptic networks, which can be reactivated by high-contrast visual impulses. If the learning-related spatial priority map is likewise stored at the synaptic level, sensory input from the placeholder display could reactivate it. This hypothesis is consistent with Duncan et al. (2023), who demonstrated reliable decoding of high-probability target locations from EEG signals following a high-contrast visual impulse.
Although suppression is in place by the onset of the search display, it remains unclear whether it requires a preceding placeholder display, or whether genuinely proactive suppression can arise without any reactive trigger. The present findings do not rule out proactive suppression: Chapter 2 demonstrated spatial suppression from the outset of the search display without any preceding placeholder, consistent with a proactive account. It is therefore plausible that both reactive and proactive mechanisms govern learned spatial suppression during visual search, with reactive suppression serving as the more energy-efficient default and proactive suppression as a less common, more cognitively demanding mode.