Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for identifying some of the factors that drive the allocation of visual attention and determine visual selection. This article reviews experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces.
The classic animation experiment by Heider and Simmel (1944) revealed that humans have a strong tendency to impose narrative even on displays showing interactions between simple geometric shapes. In their most famous animation, featuring three simple shapes, observers almost inevitably interpreted the shapes as rational agents with intentions, desires, and beliefs (“That nasty big triangle!”). Much work on dynamic scenes has identified basic visual properties that can make shapes seem animate. Here, we investigate the limits on observers' ability to use narrative to share information about animated scenes. We created 30-second Heider-style cartoons containing 3–9 items. Item trajectories were generated automatically by a simple set of rules, without a script. In Experiments 1 and 2, 10 observers wrote short narratives for each cartoon. Next, new observers were shown a cartoon and then presented with a narrative generated either for that specific cartoon or for a different cartoon with the same items. Observers rated the fit of the narrative to the cartoon on a scale from 1 (clearly does not fit) to 5 (clearly fits). Performance declined markedly when the number of items exceeded 3. In Experiment 3, observers determined whether a short clip came from a longer cartoon. In Experiment 4, observers determined which of two narratives fit a cartoon. Finally, in Experiment 5, narratives always mentioned every item in a display. In all cases of matching narrative to cartoon, performance dropped most dramatically between 3 and 4 items.
It is currently unknown whether changes to the oculomotor system can induce changes to the distribution of spatial attention around a fixated target. Previous studies have used perceptual performance tasks to show that adaptation of saccadic eye movements affects dynamic properties of visual attention, in particular attentional shifts to a cued location. In this study, we examined the effects of saccadic adaptation on the static distribution of visual attention around fixation (the attentional field). We used the classic double-step adaptation procedure and a flanker task to test for differences in the attentional field after forward and backward adaptation. Reaction time (RT) measures revealed that the shape of the attentional field changed significantly after backward adaptation, as shown by altered interference from distracters at different eccentricities, but not after forward adaptation. This finding reveals that modification of saccadic amplitudes can affect not only dynamic properties of attention but also its static properties. A major implication is that the neural mechanisms underlying attentional selection and the oculomotor system can reweight each other.