A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while others have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and the same stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects, bell peppers); however, the methodology varied across experiments. Experiment 1 presented the stimulus objects in random 3-dimensional (3-D) orientations under full-cue conditions (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. The current study replicated both sets of previous findings. When visual and haptic information was restricted (and the stimulus objects were placed in the same orientation on every trial), the participants' visual performance was superior to their haptic performance (replicating the earlier findings of Davidson et al., Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants were able to actively manipulate objects and to see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
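The abstract does not say how discrimination performance in the same vs. different task was quantified. As a purely illustrative sketch (not the authors' analysis), one common way to score such tasks is a signal-detection sensitivity index (d') computed from hit and false-alarm rates; the function name and the response counts below are hypothetical, and treating the same/different judgments like a yes/no task is a deliberate simplification.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Simple yes/no-style sensitivity index for a same/different task.

    Uses a log-linear correction (add 0.5 to each count, 1 to each total)
    so that perfect or empty cells do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant in one condition
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```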
To accomplish the deceptively simple task of perceiving the size of objects in the visual scene, the visual system combines information about the retinal size of the object with several other cues, including perceived distance, relative size, and prior knowledge. When local component elements are perceptually grouped to form objects, the task is further complicated because a grouped object does not have a continuous contour from which retinal size can be estimated. Here, we investigate how the visual system solves this problem and makes it possible for observers to judge the size of perceptually grouped objects. We systematically vary the shape and orientation of the component elements in a two-alternative forced-choice task and find that the perceived size of the array of component elements can be almost perfectly predicted from the distance between the centroids of the component elements and the center of the array. This is true whether the global contour forms a circle or a square. When the elements are positioned such that their centroids along the global contour lie at different distances from the center, perceived size is based on the average of those distances. These results indicate that perceived size does not depend on the size of the individual elements, and that the smooth contours formed by the outer edges of the component elements are not used to estimate size. The current study adds to a growing literature highlighting the importance of centroids in visual perception and may have implications for how size is estimated for ensembles of different objects.
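The centroid-based account described above can be illustrated with a minimal sketch: predicted size is taken as the mean distance from each element's centroid to the center of the array, so mixed-distance layouts are predicted by the average distance rather than the outermost contour. The function name, the specific element layouts, and the use of the raw mean as the prediction are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def predicted_size(element_centroids, array_center):
    """Predict perceived array size as the mean distance from each
    element's centroid to the center of the array (illustrative only)."""
    centroids = np.asarray(element_centroids, dtype=float)
    center = np.asarray(array_center, dtype=float)
    return np.linalg.norm(centroids - center, axis=1).mean()

# Hypothetical example: eight element centroids on a circle of radius 5
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
circle = np.column_stack([5 * np.cos(angles), 5 * np.sin(angles)])
print(predicted_size(circle, (0, 0)))  # ~5.0

# Centroids at mixed distances (four at radius 4, four at radius 6):
# the prediction is the average distance, not the outer contour.
radii = np.r_[np.full(4, 4.0), np.full(4, 6.0)]
mixed = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
print(predicted_size(mixed, (0, 0)))   # ~5.0 (average of 4 and 6)
```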