Feature and conjunction searches are widely used to study attentional deployment, yet the spatiotemporal behavior of attention in these tasks remains under debate. Are multiple search stimuli processed in parallel or sequentially? Do the sampling of visual information and the deployment of attention differ between these two types of search, and if so, how? We used an innovative methodology to estimate, on a single-trial basis, the distribution of attention during feature and conjunction searches. Observers had to detect and discriminate a tilted low-spatial-frequency grating presented either among three low-spatial-frequency vertical gratings (feature search) or among a mixture of low-spatial-frequency vertical gratings and high-spatial-frequency tilted gratings (conjunction search). After a variable delay, two probes were flashed at random locations, and performance in reporting them was used to infer attentional deployment to those locations. By solving a second-degree equation, we determined the probability of probe report at the most (P1) and least (P2) attended locations on a given trial. Were P1 and P2 equal, we would conclude that attention had been uniformly distributed across all four locations; otherwise, we would conclude that the sampling of visual information and the deployment of attention had been nonuniform. Our results show that processing was nonuniformly distributed across the four locations in both searches and was modulated periodically over time, at ∼5 Hz for the conjunction search and ∼12 Hz for the feature search. We argue that the former corresponds to the periodicity of attentional deployment during the search, whereas the latter corresponds to ongoing sampling of visual information. Because the different locations were not processed simultaneously, this study rules out a strict parallel model for both search types.
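To make the second-degree-equation step concrete, here is a minimal sketch of one way P1 and P2 can be recovered from the observed proportions of trials on which both probes, or neither probe, were correctly reported. This is our illustrative reading of the abstract, not the authors' analysis code: the function name solve_p1_p2 is hypothetical, and the sketch assumes the two probe reports on a trial are independent given P1 and P2.

```python
import math

def solve_p1_p2(p_both, p_none):
    """Recover P1 and P2 from two observed trial proportions.

    Hypothetical helper illustrating the quadratic step. Assumes the two
    probe reports on a trial are independent, with report probabilities
    P1 (most attended probed location) and P2 (least attended), so that:
        P(both correct)    = P1 * P2
        P(neither correct) = (1 - P1) * (1 - P2)
    Hence P1 and P2 are the roots of x^2 - S*x + Q = 0, with
        Q = P1 * P2 = p_both
        S = P1 + P2 = 1 - p_none + p_both.
    """
    Q = p_both
    S = 1.0 - p_none + p_both
    disc = S * S - 4.0 * Q
    # Sampling noise can push the discriminant slightly below zero;
    # clip at zero, which corresponds to P1 = P2 (uniform attention).
    disc = max(disc, 0.0)
    root = math.sqrt(disc)
    return (S + root) / 2.0, (S - root) / 2.0

# Worked example: both probes reported on 30% of trials, neither on 20%.
# Then S = 1.10, Q = 0.30, and the roots are (1.10 ± 0.10) / 2.
p1, p2 = solve_p1_p2(0.30, 0.20)
print(round(p1, 3), round(p2, 3))  # 0.6 0.5 (check: 0.6*0.5 = 0.30, 0.4*0.5 = 0.20)
```

Under these assumptions, a zero discriminant (P1 = P2) corresponds to the uniform-attention case described above, whereas P1 > P2 indicates nonuniform deployment across the probed locations.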