Behavioral studies have shown that picture-plane inversion affects face and object recognition differently, suggesting face-specific processing mechanisms in the human brain. Here we used event-related potentials (ERPs) to investigate the time course of this behavioral inversion effect for both faces and novel objects. ERPs were recorded from 14 subjects presented with upright and inverted visual categories, including human faces and novel objects (Greebles). An N170 was obtained for all categories of stimuli, including Greebles. However, only inverted faces delayed and enhanced the N170 (bilaterally). These observations indicate that the N170 is not specific to faces, as has previously been claimed. In addition, the amplitude difference between faces and objects does not reflect face-specific mechanisms, since it can be smaller than the difference between non-face object categories. Nonetheless, there are early differences in the time course of categorization for faces and non-faces across inversion, which may be attributed either to stimulus category per se (e.g. face-specific mechanisms) or to differences in the level of expertise with these categories.
Behavioral studies indicate a right-hemisphere advantage for processing a face as a whole and a left-hemisphere superiority for processing based on face features. The present PET study identifies the anatomical localization of these effects in well-defined regions of the middle fusiform gyri of both hemispheres. The right middle fusiform gyrus, previously described as a face-specific region, was more activated when matching whole faces than face parts, whereas this pattern of activity was reversed in the left homologous region. These lateralized differences appeared to be specific to faces, since control objects processed either as wholes or as parts did not induce any change of activity within these regions. This double dissociation between two modes of face processing provides new evidence regarding the lateralized localization of face individualization mechanisms in the human brain.
Scalp event-related potentials (ERPs) in humans indicate that face and object processing differ approximately 170 ms following stimulus presentation, at the point of the N170 occipitotemporal component. The N170 is delayed and enhanced to inverted faces but not to inverted objects. We tested whether this inversion effect reflects early mechanisms exclusive to faces or whether it generalizes to other stimuli as a function of visual expertise. ERPs to upright and inverted faces and novel objects (Greebles) were recorded in 10 participants before and after 2 weeks of expertise training with Greebles. The N170 component was observed for both faces and Greebles. The results are consistent with previous reports in that the N170 was delayed and enhanced for inverted faces at recording sites in both hemispheres. For Greebles, the same effect of inversion was observed only for experts, primarily in the left hemisphere. These results suggest that the mechanisms underlying the electrophysiological face-inversion effect extend to visually homogeneous nonface object categories, at least in the left hemisphere, but only when such mechanisms are recruited by expertise.
Intermodal binding between affective information that is seen as well as heard triggers a mandatory process of audiovisual integration. To track the time course of this audiovisual binding, event-related brain potentials were recorded while subjects saw a facial expression and concurrently heard an auditory fragment. The results suggest that the combination of the two inputs occurs early in time (110 ms post-stimulus) and manifests as a specific enhancement in amplitude of the auditory N1 component. These findings are compatible with previous functional neuroimaging results on audiovisual speech showing strong audiovisual interactions in auditory cortex in the form of magnetic response amplifications, as well as with electrophysiological studies demonstrating early audiovisual interactions (before 200 ms post-stimulus). Moreover, our results show that the informational content carried by the two modalities plays a crucial role in triggering the intermodal binding process.