This article addresses three issues in face processing. First, is face processing primarily accomplished by the right hemisphere, or do both left- and right-hemisphere mechanisms play important roles? Second, are the mechanisms the same as those involved in general visual processing, or are they dedicated to face processing? Third, how can the mechanisms be characterized more precisely in terms of processes such as visual parsing? We explored these issues in four experiments using the divided visual field methodology. Experiments 1 and 2 provided evidence that both left- and right-hemisphere mechanisms are involved in face processing. In Experiment 1, a right-hemisphere advantage was found for both Same and Different trials when Same faces were identical and Different faces differed on all three internal facial features. Experiment 2 replicated the right-hemisphere advantage for Same trials but showed a left-hemisphere advantage for Different trials when only one of three facial features differed between the target and the probe faces. Experiment 3 showed that the right-hemisphere advantage obtained with upright faces in Experiment 2 disappeared when the faces were inverted. This result suggests that there are right-hemisphere mechanisms specialized for processing upright faces, although it could not be determined whether these mechanisms are completely face-specific. Experiment 3 also provided evidence that the left-hemisphere mechanisms used in face processing tasks are general-purpose visual mechanisms not restricted to particular classes of visual stimuli. In Experiment 4, a left-hemisphere advantage was obtained when the task was to find one facial feature that was the same between the target and the probe faces. We suggest that the left-hemisphere advantages observed in face processing reflect the parsing and analysis of the local elements of a face.
Three general classes of algorithms have been proposed for figure/ground segregation. One class attempts to delineate figures by searching for edges, another attempts to grow homogeneous regions, and the third consists of hybrid algorithms, which combine both procedures in various ways. The experiment reported here demonstrated that humans use a hybrid algorithm that makes use of both kinds of processes simultaneously and interactively. This conclusion follows from the patterns of response times observed when humans tried to recognize degraded polygons. Blurring the edges selectively impaired the edge-detection process, and imposing noise over the figure and background selectively impaired the region-growing process. By varying the amounts of both sorts of degradation independently, the interaction between the two processes was observed.

One of the fundamental purposes of vision is to allow us to recognize objects. Recognition occurs when sensory input accesses the appropriate memory representations, which allows one to know more about the stimulus than is apparent in the immediate input (e.g., its name). Before visual input can be compared to previously stored information, the regions of the image likely to correspond to a figure must be segregated from those comprising the background. The initial input from the eyes is in many ways like a bit-map image in a computer, with only local properties represented by the activity of individual cells; only after the input is organized into larger groups, which are likely to correspond to objects and parts thereof, can it be encoded into memory and compared to stored representations of shape. Thus, understanding the processes that segregate figure from ground is of fundamental importance for understanding the nature of perception.

Researchers in computer vision have faced the same problem of segregating figure from ground, and in this report we explore whether the human brain uses some of the algorithms they have developed. In computer vision, the input is a large intensity array, with a number representing the intensity of light at each point in the display. Two broad classes of algorithms have been devised to organize this welter of input into regions likely to correspond to objects. One class contains edge-based algorithms (1-3). These algorithms look first for sharp changes in intensity (i.e., maxima in the first derivative or zero crossings in the second derivative of the function relating intensity to position), which are assumed to correspond to edges. In the Marr-Hildreth theory (3), these changes are observed at multiple scales of resolution and, if present at each, are taken to indicate edges (and not texture or the like). The local points of sharp change are connected, resulting in a depiction of edges that are assembled into the outlines of objects. The other class contains the so-called region-based algorithms (4-7). These algorithms construct regions by growing and splitting areas that are maximally homogeneous...
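To make the two classes concrete, here is a minimal sketch in Python of a single-scale edge detector (zero crossings of a Laplacian of Gaussian, in the spirit of Marr-Hildreth), a simple region grower, and one way of coupling them into a hybrid in which growth cannot cross detected edges. The function names, parameters, and toy image are illustrative assumptions, not taken from the original study; NumPy and SciPy are assumed to be available.

```python
# A minimal sketch of the two algorithm classes and one hybrid coupling.
# Assumes NumPy/SciPy; names, parameters, and the toy image below are
# illustrative choices, not taken from the original study.
import numpy as np
from collections import deque
from scipy.ndimage import gaussian_laplace

def edge_map(image, sigma=2.0):
    """Edge-based class: mark zero crossings of the Laplacian of
    Gaussian (the Marr-Hildreth idea, at a single scale for brevity;
    the theory proper requires agreement across multiple scales)."""
    log = gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(image.shape, dtype=bool)
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0   # vertical sign change
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0   # horizontal sign change
    return edges

def grow_region(image, seed, tol=10.0):
    """Region-based class: grow outward from a seed pixel, absorbing
    4-connected neighbours whose intensity stays within `tol` of the
    running mean of the region grown so far."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(image[nr, nc] - total / count) <= tol):
                region[nr, nc] = True
                total += float(image[nr, nc])
                count += 1
                frontier.append((nr, nc))
    return region

def hybrid_segment(image, seed, sigma=2.0, tol=10.0):
    """Hybrid: the two processes interact -- detected edges act as
    barriers that region growth cannot cross. The seed must lie inside
    a region, not on an edge."""
    barrier = np.where(edge_map(image, sigma), np.inf, image.astype(float))
    return grow_region(barrier, seed, tol)  # inf pixels are never absorbed

# Toy usage: a bright square on a dark background, seeded at its centre.
img = np.zeros((64, 64))
img[16:48, 16:48] = 100.0
figure = hybrid_segment(img, seed=(32, 32))
```

In the experiment's terms, blurring the stimulus flattens the zero crossings and so impairs something like edge_map, while superimposing noise breaks the homogeneity test and so impairs something like grow_region; varying both degradations independently is what exposed the interaction between the two processes.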