The binding of features into perceptual wholes is a well-established phenomenon, which has previously been studied only in the context of early vision and low-level features, such as colour or proximity. We hypothesised that a similar binding process, based on higher-level information, could bind people into interacting groups, facilitating faster processing and enhanced memory of social situations. To investigate this possibility we used three experimental approaches to explore grouping effects in displays involving interacting people. First, using a visual search task we demonstrate more rapid processing for interacting (versus non-interacting) pairs in an odd-quadrant paradigm (Experiments 1a & 1b). Second, using a spatial judgment task, we show that interacting individuals are remembered as physically closer than are non-interacting individuals (Experiments 2a & 2b). Finally, we show that memory retention of group-relevant and group-irrelevant features is enhanced when recalling interacting partners in a surprise memory task (Experiments 3a & 3b). Each of these results is consistent with the social binding hypothesis, and alternative explanations based on low-level perceptual features and attentional effects are ruled out. We conclude that automatic mid-level grouping processes bind individuals into groups on the basis of their perceived interaction. Such social binding could provide the basis for more sophisticated social processing. Identifying the automatic encoding of social interactions in visual search, distortions of spatial working memory, and facilitated retrieval of object properties from longer-term memory opens new approaches to studying social cognition, with possible practical applications.
When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Two rival explanations have been advanced to explain this search advantage for facing dyads. According to one account, the search advantage reflects the fact that front-to-front targets engage domain-specific social interaction processing that helps stimuli compete more effectively for limited attentional resources. Another view is that the effect is a by-product of the ability of individual heads and bodies to direct observers' visuospatial attention. Here, we describe a two-part investigation that sought to test these accounts. First, we found that it is possible to replicate the search advantage with non-social objects. Next, we employed a cueing paradigm to investigate whether it is the ability of individual items to direct observers' visuospatial attention that determines whether an object category produces the search advantage for facing dyads. We found that the strength of the cueing effect produced by an object category correlated closely with the strength of the search advantage produced by that object category. Taken together, these results provide strong support for the directional cueing account.
The Twenty Item Prosopagnosia Index (PI20) is a self-report questionnaire used for quantifying prosopagnosic traits. This scale is intended to help researchers identify cases of developmental prosopagnosia by providing standardized self-report evidence to complement diagnostic evidence obtained from objective computer-based tasks. In order to respond appropriately to items, prosopagnosics must have some insight that their face recognition is well below average, while non-prosopagnosics need to understand that their relative face recognition ability falls within the typical range. There has been considerable debate about whether participants have the necessary insight into their face recognition abilities to respond appropriately. In the present study, we sought to determine whether the PI20 provides meaningful evidence of face recognition impairment. In keeping with the intended use of the instrument, we used PI20 scores to identify two groups: high-PI20 scorers (those with self-reported face recognition difficulties) and low-PI20 scorers (those with no self-reported face recognition difficulties). We found that participant groups distinguished on the basis of PI20 scores clearly differed in terms of their mean performance on objective measures of face recognition ability. We also found that high-PI20 scorers were more likely to achieve levels of face recognition accuracy associated with developmental prosopagnosia.