Nonaccidental properties (NAPs) are image properties that are invariant over orientation in depth and allow facile recognition of objects at varied orientations. NAPs are distinguished from metric properties (MPs), which generally vary continuously with changes in orientation in depth. While a number of studies have demonstrated greater sensitivity to NAPs in human adults, pigeons, and macaque IT cells, the few studies that investigated sensitivity in preschool children did not find significantly greater sensitivity to NAPs. However, these studies did not provide a principled measure of the physical image differences for the MP and NAP variations. We assessed sensitivity to NAP vs. MP differences in a nonmatch-to-sample task in which 14 preschool children were instructed to choose which of two shapes was different from a sample shape in a triangular display. Importantly, we scaled the shape differences so that MP and NAP differences were roughly equal (although the MP differences were slightly larger), using the Gabor-jet model of V1 similarity (Lades et al., 1993). Mean reaction times (RTs) for every child were shorter when the target shape differed from the sample in a NAP than in an MP. The results suggest that preschoolers, like adults, are more sensitive to NAPs, which could explain their ability to rapidly learn new objects, even without observing them from every possible orientation.
It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to make sound inferences about the contributions of later stages to neural activity or psychophysical performance. A Web-based app is described that is based on the Malsburg Gabor-jet model (Lades et al., 1993), which allows easy specification of the V1 similarity of pairs of stimuli, no matter how intricate. The model predicts the psychophysical discriminability of metrically varying faces and complex blobs almost perfectly (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012), and serves as the input stage of a large family of contemporary neurocomputational models of vision.

Keywords: 2-D shape and form · Similarity · Face perception

Consider the problem of determining whether people or monkeys are more sensitive to differences in nonaccidental properties (NAPs), such as whether a contour is straight or curved, than to differences in metric properties (MPs), such as differences in degree of curvature. If we assume that the sensitivity to differences in NAPs arises at a stage in the ventral pathway later than V1, how can the physical properties of the stimuli be selected in a principled manner, so that the comparisons are not confounded with differences in V1 activation? The same methodological problem arises if an investigator wishes to determine whether observers are more sensitive to differences in facial expression than to differences in identity (or sex, or orientation in depth, etc.). This problem arises not only in psychophysical scaling of stimuli, but also in studies designed to more directly reflect the underlying neural correlates, such as fMRI fast-adaptation designs and single-unit recordings. It can be argued that this problem of scaling shape similarity has been a major reason why, despite shape being the major input to visual cognition, the rigorous study of shape perception has clearly lagged the study of other perceptual attributes, such as color, motion, or stereo.

The value of an intuitive implementation of the Gabor-jet model

Despite the utility of such a scaling system, the Gabor-jet model is somewhat mathematically dense and cumbersome to explain to the uninitiated, which diminishes its accessibility. Here, we introduce a Web-based applet designed to provide an engaging, graphically oriented guided tour of the model. The applet allows users to upload their own images, observe the transformations and computations made by the algorithm, customize the visualization of different processes, and retrieve a ranking of dissimilarity values for pairs of images.
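As an illustration of the computation the applet exposes, the sketch below implements a Gabor-jet-style dissimilarity measure. The overall structure follows the published description of the model (Lades et al., 1993; Yue et al., 2012): a bank of 40 Gabor filters (5 scales × 8 orientations) is applied to each image, the filter magnitudes are sampled at the nodes of a 10 × 10 grid to form "jets," and dissimilarity is the Euclidean distance between the concatenated jet vectors. The kernel size, wavelengths, envelope width, and all function names here are illustrative assumptions, not the parameters or API of the authors' applet.

```python
# A minimal sketch of a Gabor-jet-style V1 dissimilarity metric for
# grayscale images of equal size. Grid size (10 x 10), filter bank
# (5 scales x 8 orientations), and Euclidean distance follow the
# published description of the model; specific parameter values are
# assumptions for illustration only.
import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: a plane wave under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * rotated / wavelength)
    return envelope * carrier


def gabor_jets(image, grid=10, wavelengths=(4, 8, 16, 32, 64), n_orient=8):
    """Sample Gabor filter magnitudes at grid nodes; return one vector.

    Each node contributes one "jet" of 40 coefficients (5 scales x 8
    orientations); all jets are concatenated into a single feature vector.
    """
    rows = np.linspace(0, image.shape[0] - 1, grid).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, grid).astype(int)
    features = []
    for wavelength in wavelengths:
        sigma = wavelength / 2  # assumed envelope width, a free parameter
        for k in range(n_orient):
            kern = gabor_kernel(65, wavelength, k * np.pi / n_orient, sigma)
            response = fftconvolve(image, kern, mode="same")
            features.append(np.abs(response)[np.ix_(rows, cols)].ravel())
    return np.concatenate(features)


def gabor_jet_dissimilarity(img_a, img_b):
    """Euclidean distance between the jet vectors of two images."""
    return np.linalg.norm(gabor_jets(img_a) - gabor_jets(img_b))


if __name__ == "__main__":
    # Random arrays stand in for real stimulus images, which would be
    # loaded and converted to grayscale in practice. This is the check
    # one would run to equate MP and NAP variants of a sample shape.
    rng = np.random.default_rng(0)
    sample = rng.random((128, 128))
    mp_foil = sample + 0.05 * rng.random((128, 128))
    nap_foil = sample + 0.05 * rng.random((128, 128))
    print("sample vs. MP foil: ", gabor_jet_dissimilarity(sample, mp_foil))
    print("sample vs. NAP foil:", gabor_jet_dissimilarity(sample, nap_foil))
```

Note that the distance operates on filter magnitudes rather than the raw complex responses; because magnitudes vary smoothly under small translations, the measure is tolerant of slight misalignments between images, a property of the original model as well.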