Vergence (the angular rotation of the eyes) is thought to provide essential distance information for size constancy (perceiving an object as having a constant physical size). Evidence for this comes from the fact that a target with a constant retinal size appears to shrink as the rotation of the eyes increases (indicating that the target has reduced in distance). This reduction in perceived size is supposed to maintain a constant perception of physical size in natural viewing conditions by cancelling out the increasing size of the retinal image as an object moves closer. Whilst this hypothesis has been extensively tested over the last 200 years, it has always been tested in the presence of confounding cues, such as a changing retinal image or cognitive cues to distance. Testing members of the public with normal vision, we control for these confounding cues and find no evidence of vergence size constancy.

Statement of Relevance
This work has important implications for the neural basis of size constancy and for multisensory integration. First, leading work on the neural basis of size constancy cannot differentiate between recurrent processing in V1 (based on triangulation cues, such as vergence) and top-down processing (based on pictorial cues). Since our work challenges the existence of vergence size constancy, and therefore much of the basis of the recurrent-processing account, it indicates that top-down processing likely plays a much more important role in size constancy than previously thought. Second, vergence size constancy is thought to be largely responsible for the apparent integration of the retinal image with proprioceptive distance information from the hand in the Taylor illusion (an afterimage of the hand viewed in darkness appears to shrink as the observer's hand is moved closer). We challenge this explanation of the Taylor illusion and propose a cognitive account instead.
Optimal visual quality and adequate cognitive capacity are known to be essential for driving safety. However, the interaction between vision and cognitive mechanisms while driving remains unclear. We hypothesized that, under high cognitive load, reduced visual acuity would negatively affect driving behavior, even when acuity meets the legal threshold for obtaining a driving license in Canada, and that the impact on driving performance would grow as visual acuity is further degraded. To investigate this relationship, we examined driving behavior in a driving simulator under optimal and reduced vision conditions in two scenarios involving different levels of cognitive demand: (1) a simple rural driving scenario with some pre-programmed events, and (2) a highway driving scenario accompanied by a concurrent task involving the use of a navigation device. Driving behavior was compared between two levels of visual quality degradation (lower/higher). The results support the hypothesis: a dual-task effect was observed, producing less stable driving behavior, and once the impact of cognitive load was statistically controlled, an effect of visual load also emerged in this dual-task context. These results support the idea that degraded visual quality impairs driving behavior when combined with a high-mental-workload driving environment, whereas no such impact is present under low cognitive load.
In a previous series of experiments using virtual stimuli, we found evidence that 3D shape estimation follows a superadditivity rule of depth-cue combination. According to this rule, adding depth cues leads to greater perceived depth magnitudes and, in principle, to depth overestimation. The mechanism underlying the superadditivity effect can be fully accounted for by a normative theory of cue integration, through an adaptation of a model of cue integration termed the Intrinsic Constraint (IC) model. It remains unclear, however, whether superadditivity is a byproduct of the artificial nature of virtual environments, in which explicit reasoning may infiltrate behavior and inflate depth judgments when a scene is richer in depth cues, or a genuine output of the process of depth-cue integration. In the present study, we addressed this question by testing whether the IC model's prediction of superadditivity generalizes beyond virtual-reality environments to real-world situations. We asked participants to judge the perceived 3D shape of cardboard prisms in a matching task. To assess the potential influence of explicit control over these perceptual estimates, we also asked participants to reach for and hold the same objects with their fingertips, and we analyzed the in-flight grip size during the reach. Using physical objects ensured that all visual information was fully consistent with the stimuli's 3D structure, without computer-generated artifacts. We designed a novel technique to carefully control binocular and monocular 3D cues independently of one another, allowing depth information to be added to or removed from the scene seamlessly. Even with real objects, participants exhibited a clear superadditivity effect in both the explicit and implicit tasks. Furthermore, the magnitude of this effect was accurately predicted by the IC model. These results confirm that superadditivity is an inherent feature of depth estimation.
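For context, the normative baseline against which superadditivity is usually defined is reliability-weighted cue averaging; the sketch below is this standard textbook formulation, not the authors' IC model, whose details the abstract does not spell out. Each cue $i$ yields a depth estimate $\hat{d}_i$ with variance $\sigma_i^2$, and the combined estimate is

\hat{d} = \sum_i w_i \hat{d}_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}, \qquad \frac{1}{\sigma^2} = \sum_i \frac{1}{\sigma_i^2}.

Under this baseline, adding a cue sharpens the combined estimate (the combined reliability $1/\sigma^2$ grows) but leaves its magnitude a weighted average of the single-cue estimates; superadditivity instead predicts that the combined depth estimate exceeds that average, which is the effect the abstract says the IC model captures.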