In this paper, we report that when the low-level features of targets and distractors are held constant, visual search performance can be strongly influenced by familiarity. In the first condition, a 2 rotated by 90° was the target amid rotated 5s as distractors, and vice versa. Response time increased steeply as a function of the number of distractors (82 msec/item). When the same stimuli were rotated by 90° (the second condition), however, they became familiar patterns, 2 and 5, and gave rise to much shallower search functions (31 msec/item). In the third condition, when the search was for a familiar target, N (or Z), among unfamiliar distractors, mirror-reversed Ns (or Zs), the slope was about 46 msec/item. In the last condition, when the search was for an unfamiliar target, a mirror-reversed N (or Z), among familiar distractors, Ns (or Zs), parallel search functions were found with a slope of about 1.5 msec/item. These results show that familiarity speeds visual search and that it does so principally when the distractors, not the targets, are familiar.
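The slopes above describe a linear search function, RT = intercept + slope × display size. A minimal sketch of that relation; only the slopes come from the reported data, while the 500 ms intercept and the 12-item display size are hypothetical values chosen for illustration:

```python
def predicted_rt(n_items, slope_ms, intercept_ms=500.0):
    """Linear search function: RT = intercept + slope * display size.

    Only the slopes below come from the reported data; the intercept
    is a hypothetical baseline chosen for illustration.
    """
    return intercept_ms + slope_ms * n_items

# Reported slopes (msec/item) for the four conditions.
slopes_ms = {
    "unfamiliar target, unfamiliar distractors": 82,
    "familiar target, familiar distractors": 31,
    "familiar target, unfamiliar distractors": 46,
    "unfamiliar target, familiar distractors": 1.5,
}

# Predicted RT for a hypothetical 12-item display under each condition.
for condition, slope in slopes_ms.items():
    print(f"{condition}: {predicted_rt(12, slope):.0f} ms")
```

The near-zero slope in the last condition is what makes the search "parallel": adding distractors barely changes the predicted response time.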
In two studies, observers searched for a single oblique target in a field of vertical distractors. In one experiment, target detection and identification (left versus right tilt) were compared. In another experiment, detection and localization were compared for the left versus the right half of the display. Performance on all three tasks was virtually identical: if a target could be detected, it could also be identified and localized. A review of previous studies generally supports the conclusion that performance on the three tasks is similar. This argues against current search theories, which rest heavily on data showing differences between detection, identification, and localization performance.
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

Two factors make early vision a difficult computational problem. First, the solution space is combinatorially explosive because images contain a large number of feature dimensions. Second, computation must be rapid, so processing time is limited. Many theories suggest that the visual system solves these problems by means of a divide-and-conquer strategy: the retinal image is decomposed into an array of separate representations that are processed in parallel.
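The divide-and-conquer idea can be sketched as a set of feature maps computed independently from the same image, with each map keeping features registered to their locations, in the spirit of point (2) above. This is a toy illustration, not a model from the literature; the edge "modules" are hypothetical stand-ins for feature channels:

```python
def decompose(image, modules):
    """Run each feature module independently on the same image
    ("in parallel" in the brain; sequentially here). Each output
    map keeps features registered to their image locations."""
    return {name: module(image) for name, module in modules.items()}

def horizontal_edges(image):
    """Absolute row-to-row differences: responds at horizontal edges."""
    return [[abs(image[r + 1][c] - image[r][c]) for c in range(len(image[0]))]
            for r in range(len(image) - 1)]

def vertical_edges(image):
    """Absolute column-to-column differences: responds at vertical edges."""
    return [[abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
            for row in image]

# A 6x6 image with a bright 2x2 square: both modules localize its edges.
image = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (2, 3):
        image[r][c] = 1

feature_maps = decompose(image, {"horizontal": horizontal_edges,
                                 "vertical": vertical_edges})
```

Because every map preserves location, combining features at a given position needs no central "blackboard": the maps can be read jointly at that position.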
A series of three experiments examined temporal aliasing in stereoscopic displays. The first experiment compared aliasing in frontal-plane motion at different disparities, while the second compared aliasing for motion in different depth directions. The results showed little effect of viewing conditions on perceived aliasing. The third experiment tested whether there is a binocular motion mechanism that integrates temporal sampling in the two eyes. The results were consistent with the first two studies in suggesting that aliasing is generated only by monocular motion signals. The data have both practical and theoretical implications: 1) motion produced by means of LCD glasses will require double the sampling rate needed for motion created by anaglyph methods, and 2) the short-range motion system is monocular.

TEMPORAL ALIASING

Digital displays present spatially and temporally sampled versions of continuous images. Sampling, however, can create "aliasing" artifacts, which result in unwanted image distortions and reduced fidelity. Because aliasing is such an important problem in image quality, there have been many studies examining the effects of spatial (e.g., Nyman and Laurinen, 1982, 1985) and temporal (e.g., Watson et al., 1986; Green, 1992a) sampling in video displays. This is an important research topic because knowledge of the visual mechanisms which interpolate sampled signals may suggest new methods of image compression.

The advent of stereoscopic displays presents a new opportunity to improve the realism of video displays, but it also opens a new set of questions about sampling and aliasing. There has been some research on spatial sampling (e.g., Tzelgov et al., 1990) in stereoscopic displays but no studies on temporal sampling requirements. There are two possible ways that stereo displays might alter temporal sampling requirements.
First, viewers usually verge on the plane of the image, so nonstereo displays present information to corresponding points on the two retinae. In stereo displays, however, the information may fall on noncorresponding points. This activates cortical disparity detectors which are not involved in nonstereo viewing. There is clear psychophysical evidence that humans possess stereomotion detectors with very different properties from the mechanisms for frontal-plane motion (Regan, 1991). Moreover, there is other evidence (e.g., Tyler, 1971) that frontal-plane and stereo motion mechanisms interact. Second, stereoscopic displays afford different ways to distribute the temporal sampling between the two eyes. In nonstereo displays, each eye receives images sampled at the same times.

SPIE Vol. 1669, Stereoscopic Displays and Applications III (1992), p. 101
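The practical side of the abstract's implications reduces to simple arithmetic, sketched below. The first function is the standard frequency-folding rule for an undersampled motion component; the second captures the claim that field-sequential presentation (LCD shutter glasses), which shows each eye only every other frame, needs twice the display rate of anaglyph presentation, which shows every frame to both eyes. The numbers are illustrative, not from the paper:

```python
def alias_frequency(f_hz, sample_rate_hz):
    """Fold a temporal frequency into the Nyquist band [-fs/2, fs/2].

    Components above the Nyquist limit reappear at a lower alias
    frequency; a negative result means reversed apparent motion
    (the classic wagon-wheel effect).
    """
    folded = f_hz % sample_rate_hz
    if folded > sample_rate_hz / 2:
        folded -= sample_rate_hz
    return folded

def required_display_rate(per_eye_rate_hz, method):
    """Display frame rate needed for a given per-eye sampling rate."""
    if method == "anaglyph":          # both eyes see every frame
        return per_eye_rate_hz
    if method == "field_sequential":  # frames alternate between eyes
        return 2 * per_eye_rate_hz
    raise ValueError(f"unknown method: {method!r}")

print(alias_frequency(50, 60))                        # -10: reversed motion
print(required_display_rate(60, "anaglyph"))          # 60
print(required_display_rate(60, "field_sequential"))  # 120
```

Halving each eye's sample rate halves its Nyquist limit, which is why field-sequential stereo aliases at motion speeds that a simultaneous-presentation display of the same frame rate would render cleanly.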