There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.

Keywords: psychophysics | vision | texture | numerical cognition
In the social sciences it is common practice to test specific, theoretically motivated research hypotheses using formal statistical procedures. Typically, students in these disciplines are trained in such methods from an early stage in their academic careers. In psychophysical research, on the other hand, where parameter estimates are generally obtained using a maximum-likelihood (ML) criterion and data do not lend themselves well to the least-squares methods taught in introductory courses, it is relatively uncommon to see formal model comparisons performed. Rather, it is common practice to estimate the parameters of interest (e.g., detection thresholds) and their standard errors individually across the different experimental conditions and to 'eyeball' whether the observed pattern of parameter estimates supports or contradicts some proposed hypothesis. We believe that this is at least in part due to a lack of training in the proper methodology as well as a lack of available software to perform such model comparisons when ML estimators are used. We introduce here a relatively new toolbox of MATLAB routines, called Palamedes, that allows users to perform sophisticated model comparisons. In Palamedes, we implement the model-comparison approach to hypothesis testing. This approach allows researchers considerable flexibility in targeting specific research hypotheses. We discuss in a non-technical manner how this method can be used to perform statistical model comparisons when ML estimators are used. With Palamedes we hope to make sophisticated statistical model comparisons available to researchers who may not have the statistical background or the programming skills to perform such model comparisons from scratch. Note that while Palamedes is specifically geared toward psychophysical data, the core ideas behind the model-comparison approach that our paper discusses generalize to any field in which statistical hypotheses are tested.
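The model-comparison logic described above can be sketched outside MATLAB. The following Python sketch (not Palamedes code; the 2AFC data, logistic psychometric function, and parameter choices are all illustrative assumptions) compares a "fuller" model with separate thresholds for two conditions against a "lesser" model with a shared threshold, using a likelihood-ratio test on ML fits:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Hypothetical 2AFC data for two conditions: stimulus levels, number
# correct out of 100 trials per level (illustrative values, not real data).
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
k = {"A": np.array([48, 53, 70, 84, 92]),
     "B": np.array([51, 60, 75, 90, 97])}
n = 100  # trials per stimulus level

def logistic(x, alpha, beta, gamma=0.5):
    # Logistic psychometric function with guess rate gamma (0.5 for 2AFC).
    return gamma + (1.0 - gamma) / (1.0 + np.exp(-beta * (x - alpha)))

def neg_ll(params, shared_alpha):
    # Fuller model: separate threshold (alpha) per condition.
    # Lesser model: one shared threshold; slopes (beta) free in both.
    if shared_alpha:
        aA = aB = params[0]
        bA, bB = params[1], params[2]
    else:
        aA, aB, bA, bB = params
    ll = 0.0
    for a, b, kc in ((aA, bA, k["A"]), (aB, bB, k["B"])):
        p = np.clip(logistic(x, a, b), 1e-9, 1 - 1e-9)
        ll += np.sum(kc * np.log(p) + (n - kc) * np.log(1 - p))
    return -ll

fit_full = minimize(neg_ll, [0.0, 0.0, 1.0, 1.0], args=(False,))
fit_less = minimize(neg_ll, [0.0, 1.0, 1.0], args=(True,))

# Likelihood-ratio statistic; df = difference in free parameters (1 here).
TLR = 2.0 * (fit_less.fun - fit_full.fun)
p_value = chi2.sf(TLR, df=1)
```

A small p-value would favour the fuller model, i.e., the conclusion that the thresholds genuinely differ between conditions, which is the kind of targeted hypothesis the model-comparison approach supports.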
We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red-green, and blue-yellow channels of the primate visual system. The red-green and blue-yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm.
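A minimal NumPy sketch of the kind of decomposition described can make the idea concrete. The opponent weights and the luminance normalization below are illustrative assumptions, not the paper's actual channel model, and the gradient threshold is arbitrary:

```python
import numpy as np

EPS = 1e-6  # avoid division by zero in dark regions

def opponent_planes(rgb):
    # Approximate luminance, red-green, and blue-yellow planes.  The
    # chromatic planes are divided by luminance so that multiplicative
    # shading cancels -- an illustrative choice, not the paper's exact one.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = r + g + b
    rg = (r - g) / (lum + EPS)
    by = (b - (r + g) / 2.0) / (lum + EPS)
    return lum, rg, by

def reflectance_change_map(rgb, thresh=0.05):
    # Label a pixel as a reflectance change where the chromatic planes
    # vary; near-pure luminance variation is left to shading.
    lum, rg, by = opponent_planes(rgb)
    gy, gx = np.gradient(rg)
    hy, hx = np.gradient(by)
    chroma_grad = np.hypot(gx, gy) + np.hypot(hx, hy)
    return chroma_grad > thresh

# Tiny synthetic example: left half red, right half green (a reflectance
# edge), with a vertical shading gradient multiplying the whole patch.
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 1.0
img[:, 4:, 1] = 1.0
img *= np.linspace(0.5, 1.0, 8)[:, None, None]  # shading
mask = reflectance_change_map(img)
```

In this toy case the mask flags only the red-green boundary, while the smooth luminance ramp is left to be explained by shading, mirroring the constraint the algorithm exploits.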
The past quarter century has witnessed considerable advances in our understanding of Lightness (perceived reflectance), Brightness (perceived luminance) and perceived Transparency (LBT). This review poses eight major conceptual questions that have engaged researchers during this period, and considers to what extent they have been answered. The questions concern 1. the relationship between lightness, brightness and perceived non-uniform illumination, 2. the brain site for lightness and brightness perception, 3 the effects of context on lightness and brightness, 4. the relationship between brightness and contrast for simple patch-background stimuli, 5. brightness "filling-in", 6. lightness anchoring, 7. the conditions for perceptual transparency, and 8. the perceptual representation of transparency. The discussion of progress on major conceptual questions inevitably requires an evaluation of which approaches to LBT are likely and which are unlikely to bear fruit in the long term, and which issues remain unresolved. It is concluded that the most promising developments in LBT are (a) models of brightness coding based on multi-scale filtering combined with contrast normalization, (b) the idea that the visual system decomposes the image into "layers" of reflectance, illumination and transparency, (c) that an understanding of image statistics is important to an understanding of lightness errors, (d) Whittle's logW metric for contrast-brightness, (e) the idea that "filling-in" is mediated by low spatial frequencies rather than neural spreading, and (f) that there exist multiple cues for identifying non-uniform illumination and transparency. Unresolved issues include how relative lightness values are anchored to produce absolute lightness values, and the perceptual representation of transparency. Bridging the gap between multi-scale filtering and layer decomposition approaches to LBT is a major task for future research.
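The logW metric singled out in (d) can be illustrated in a few lines. The form W = ΔL / L_min, with L_min the smaller of target and background luminance, is assumed here from Whittle's contrast work; the luminance values are made up for illustration:

```python
import math

def whittle_W(L_target, L_background):
    # Whittle's contrast metric W = |dL| / L_min, where L_min is the
    # smaller of target and background luminance (form assumed from
    # Whittle's work on increments and decrements).
    dL = abs(L_target - L_background)
    L_min = min(L_target, L_background)
    return dL / L_min

# An increment and a decrement of equal |dL| on a 50 cd/m^2 background:
W_inc = whittle_W(60.0, 50.0)   # dL = 10, L_min = 50 -> W = 0.2
W_dec = whittle_W(40.0, 50.0)   # dL = 10, L_min = 40 -> W = 0.25
logW_inc = math.log(W_inc)
```

Note that the decrement yields a larger W than the increment of equal |ΔL|, capturing the increment/decrement asymmetry that makes W a better predictor of brightness than simple contrast ratios.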