We carried out two functional magnetic resonance imaging experiments to investigate the cortical mechanisms underlying the contribution of form and surface properties to object recognition. In experiment 1, participants performed same-different judgments in separate blocks of trials on pairs of unfamiliar "nonsense" objects on the basis of their form, surface properties (i.e., both color and texture), or orientation. Attention to form activated the lateral occipital (LO) area, whereas attention to surface properties activated the collateral sulcus (CoS) and the inferior occipital gyrus (IOG). In experiment 2, participants were required to make same-different judgments on the basis of texture, color, or form. Again, attention to form activated area LO, whereas attention to texture activated regions in the IOG and the CoS, as well as regions in the lingual sulcus and the inferior temporal sulcus. Within these last four regions, activation associated with texture was higher than activation associated with color. No color-specific cortical areas were identified in these regions, although parts of V1 and the cuneus yielded higher activation for color than for texture. These results suggest that there are separate form and surface-property pathways in extrastriate cortex. The extraction of information about an object's color seems to occur relatively early in visual analysis compared with the extraction of surface texture, perhaps because the latter requires more complex computations.
The visual properties of an object provide many cues to the tensile strength, compliance, and density of the material from which it is made. However, it is not well understood how these implicit associations affect our perception of these properties and how they determine the initial forces applied when an object is picked up. Here we examine the effects of these cues on such forces by using the classic "material-weight illusion" (MWI). Grip and load forces were measured in three experiments as participants lifted cubes made from metal, wood, and expanded polystyrene. These cubes were adjusted to have a different mass than would be expected for their apparent material. For the initial lifts, the forces were scaled to the expected weight of each object, such that the metal block was gripped and lifted with more force than the polystyrene one. After a few lifts, however, participants scaled their forces to the actual weight of the blocks, implicitly disregarding the misleading visual cues to each block's composition (experiments 1 and 2). Despite this rapid rescaling, participants experienced a robust MWI throughout the experiments. In fact, the grip and load forces never matched the perception of weight until the differences in the visual surface properties between the blocks were removed (experiment 3). These findings are discussed in relation to recent debates about the underlying causes of weight-based illusions and the effect of top-down visual cues on perception and action.
Our visual system can extract summary statistics from large collections of similar objects without forming detailed representations of the individual objects in the ensemble. Such object ensemble representation is adaptive and allows us to overcome the capacity limitation associated with representing specific objects. Surprisingly, little is known about the neural mechanisms supporting such object ensemble representation. Here we showed human observers identical photographs of the same object ensemble, different photographs depicting the same ensemble, or different photographs depicting different ensembles. We observed fMRI adaptation in anterior-medial ventral visual cortex whenever object ensemble statistics repeated, even when local image features differed across photographs. Interestingly, such object ensemble processing is closely related to texture and scene processing in the brain. In contrast, the lateral occipital area, a region involved in object-shape processing, showed adaptation only when identical photographs were repeated. These results provide the first step toward understanding the neural underpinnings of real-world object ensemble representation.
Skilled manipulation requires the ability to predict the weights of viewed objects based on learned associations linking object weight to visual appearance. However, the neural mechanisms involved in extracting weight information from viewed object properties are unknown. Given that ventral visual pathway areas represent a wide variety of object features, one intriguing but as yet untested possibility is that these areas also represent object weight, a nonvisual, motor-relevant object property. Here, using event-related fMRI and pattern classification techniques, we tested the novel hypothesis that object-sensitive regions in occipitotemporal cortex (OTC), in addition to traditional motor-related brain areas, represent object weight when preparing to lift that object. In two studies, the same participants prepared and then executed lifting actions with objects of varying weight. In the first study, we show that when lifting visually identical objects, where predicted weight is based solely on sensorimotor memory, weight is represented in object-sensitive OTC. In the second study, we show that when object weight is associated with a particular surface texture, texture-sensitive OTC areas also come to represent object weight. Notably, these texture-sensitive areas failed to carry information about weight in the first study, when object surface properties did not specify weight. Our results indicate that the integration of visual and motor-relevant object information occurs at the level of single OTC areas and provide evidence that the ventral visual pathway is actively and flexibly engaged in processing object weight, an object property critical for action planning and control.