Every day we view images on our mobile phones and scroll through them within a limited display area. At present, however, visual perception via image scrolling is not well understood. This study investigated the nature of visual perception within a small window frame by comparing visual search efficiency under three modes: scrolling, moving-window, and free-viewing (no window). The number of items and the stimulus size were varied. Results showed that search efficiency depended on the search mode: the slowest search occurred under the scrolling condition, followed by the moving-window condition, and the fastest search occurred under the free-viewing condition. Under the scrolling condition, response time increased least steeply with the number of items but most steeply with stimulus size relative to the other two conditions. Analysis of the scan traces revealed frequent pauses interspersed with small, fast stimulus shifts under the scrolling condition, but slow, continuous window movements interrupted by a few pauses under the moving-window condition. We concluded that searching by scrolling was less efficient than searching with a moving window, reflecting differences in the dynamic properties of participants' scanning behavior.
Previous studies have demonstrated that when target and nontarget objects are similar, comparing the representation of the currently viewed object with the target representation held in memory requires more attentional resources, resulting in less efficient discrimination. An important factor determining this effect is how many features the target and nontarget objects share. The present study examined whether the effect of target-nontarget similarity on attentional processes depends mainly on individual feature representations or on a feature-integrated representation. A visual three-category oddball task was conducted that required detection of a target defined by a combination of color, shape, and motion. Results showed an increase in the amplitude of the earlier late positive complex (LPC) for deviant stimuli that shared the target's color; this increase persisted into the later LPC when the deviants also shared the target's shape. The findings suggest that target-nontarget similarity in each feature dimension determines the attentional resources required by later attentional processes in a serial and hierarchical manner.
In the human visual system, different attributes of an object are processed separately and are thought to be temporarily bound by attention into an integrated representation to produce a specific response. However, if such representations existed in the brain for arbitrary multi-attribute objects, a combinatorial explosion problem would be unavoidable. Here, we show that attention may bind features of different attributes only in pairs and that these bound feature pairs, rather than integrated object representations, are associated with responses for unfamiliar objects. We found that in a task mapping three-attribute stimuli to responses, presenting the three attributes pairwise (two attributes per window) did not significantly complicate feature integration or response selection when the stimuli were not very familiar. We also found that repeated presentation of the same triple conjunctions significantly improved performance on the stimulus-response task when the correct responses were determined by the combination of three attributes, but this familiarity effect was not observed when the response could be determined by two attributes. These findings indicate that integrating three or more attributes is a process distinct from integrating two, requiring long-term learning or some serial process. This suggests that integrated object representations are not formed, or are formed only for a limited number of very familiar objects, which resolves the computational difficulty of the binding problem.