How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a range of slants and distances using a slant-discrimination task. The reliability of the texture cue increases as |slant| increases and does not change with distance. The reliability of the disparity cue decreases as distance increases and varies with slant in a way that also depends on viewing distance. The trends in the single-cue data can be understood in terms of the information available in the retinal images and issues related to solving the binocular correspondence problem. To test the MLE model, we measured perceived slant of two-cue stimuli when disparity and texture were in conflict and the reliability of slant estimation when both cues were available. Results from the two-cue study indicate, consistent with the MLE model, that observers weight each cue according to its relative reliability: Disparity weight decreased as distance and |slant| increased. We also observed the expected improvement in slant estimation when both cues were available. Apart from a few discrepancies, our data indicate that observers combine cues in a statistically optimal fashion and thereby reduce the variance of slant estimates below that which could be achieved from either cue alone. These results are consistent with other studies that quantitatively examined the MLE model of cue combination. Thus, there is a growing empirical consensus that MLE provides a good quantitative account of cue combination and that sensory information is used in a manner that maximizes the precision of perceptual estimates.
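The combination rule being tested can be stated compactly. The following is a minimal sketch of reliability-weighted (MLE) combination under the standard assumptions of Gaussian, statistically independent single-cue estimates; the function name and numeric values are illustrative placeholders, not data from the study.

```python
import numpy as np

# Minimal sketch of MLE (reliability-weighted) cue combination, assuming Gaussian,
# independent single-cue slant estimates. All numbers are placeholders, not measured data.

def mle_combine(slant_texture, sigma_texture, slant_disparity, sigma_disparity):
    """Weight each single-cue slant estimate by its relative reliability r = 1/sigma^2
    and return the combined estimate together with its predicted standard deviation."""
    r_t = 1.0 / sigma_texture**2        # reliability of the texture (perspective) cue
    r_d = 1.0 / sigma_disparity**2      # reliability of the disparity cue
    w_t = r_t / (r_t + r_d)             # weights sum to 1
    w_d = r_d / (r_t + r_d)
    slant_combined = w_t * slant_texture + w_d * slant_disparity
    sigma_combined = np.sqrt(1.0 / (r_t + r_d))  # never larger than either single-cue SD
    return slant_combined, sigma_combined

# Example: disparity is the more reliable cue here, so it receives the larger weight,
# and the predicted combined SD falls below both single-cue SDs.
print(mle_combine(slant_texture=30.0, sigma_texture=6.0,
                  slant_disparity=26.0, sigma_disparity=3.0))
```

The predicted two-cue variance, 1/(r_t + r_d), is the quantity the two-cue reliability measurements test against.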
Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.
Expanding the US Food and Drug Administration–approved indications for immune checkpoint inhibitors in patients with cancer has resulted in therapeutic success and immune-related adverse events (irAEs). Neurologic irAEs (irAE-Ns) have an incidence of 1%–12% and a high fatality rate relative to other irAEs. Lack of standardized disease definitions and accurate phenotyping leads to syndrome misclassification and impedes development of evidence-based treatments and translational research. The objective of this study was to develop consensus guidance for an approach to irAE-Ns including disease definitions and severity grading. A working group of four neurologists drafted irAE-N consensus guidance and definitions, which were reviewed by the multidisciplinary Neuro irAE Disease Definition Panel including oncologists and irAE experts. A modified Delphi consensus process was used, with two rounds of anonymous ratings by panelists and two meetings to discuss areas of controversy. Panelists rated content for usability, appropriateness and accuracy on 9-point scales in electronic surveys and provided free text comments. Aggregated survey responses were incorporated into revised definitions. Consensus was based on numeric ratings using the RAND/University of California Los Angeles (UCLA) Appropriateness Method with prespecified definitions. 27 panelists from 15 academic medical centers voted on a total of 53 rating scales (6 general guidance, 24 central and 18 peripheral nervous system disease definition components, 3 severity criteria and 2 clinical trial adjudication statements); of these, 77% (41/53) received first round consensus. After revisions, all items received second round consensus. Consensus definitions were achieved for seven core disorders: irMeningitis, irEncephalitis, irDemyelinating disease, irVasculitis, irNeuropathy, irNeuromuscular junction disorders and irMyopathy. For each disorder, six descriptors of diagnostic components are used: disease subtype, diagnostic certainty, severity, autoantibody association, exacerbation of pre-existing disease or de novo presentation, and presence or absence of concurrent irAE(s). These disease definitions standardize irAE-N classification. Diagnostic certainty is not always directly linked to certainty to treat as an irAE-N (ie, one might treat events in the probable or possible category). Given consensus on accuracy and usability from a representative panel group, we anticipate that the definitions will be used broadly across clinical and research settings.
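For readers unfamiliar with the rating mechanics, the following is a generic sketch of how 9-point appropriateness ratings are commonly summarized under the RAND/UCLA method. The median bands and the disagreement rule shown are common defaults used only for illustration; they are not necessarily the panel's prespecified definitions.

```python
import statistics

# Generic sketch of RAND/UCLA-style classification of 9-point appropriateness ratings.
# Thresholds below are common defaults for illustration, not this panel's prespecified rules.

def classify_item(ratings):
    """Classify one rated item from a list of panelists' 1-9 ratings."""
    med = statistics.median(ratings)
    n = len(ratings)
    low = sum(1 for r in ratings if r <= 3)        # ratings in the bottom tertile (1-3)
    high = sum(1 for r in ratings if r >= 7)       # ratings in the top tertile (7-9)
    disagreement = low >= n / 3 and high >= n / 3  # assumed disagreement rule
    if disagreement:
        return "uncertain (disagreement)"
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

# Example with hypothetical ratings from a small panel
print(classify_item([8, 9, 7, 8, 6, 9, 7, 8, 7]))  # -> "appropriate"
```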
Several investigators have claimed that the retinal coordinates of corresponding points shift with vergence eye movements. Two kinds of shifts have been reported. First, global shifts that increase with retinal eccentricity; such shifts would cause a flattening of the horopter at all viewing distances and would facilitate fusion of flat surfaces. Second, local shifts that are centered on the fovea; such shifts would cause a dimple in the horopter near fixation and would facilitate fusion of points fixated at extreme viewing distances. Nearly all of the empirical evidence supporting shifts of corresponding points comes from horopter measurements and from comparisons of subjective and objective fixation disparity. In both cases, the experimenter must infer the retinal coordinates of corresponding points from external measurements. We describe four factors that could affect this inference: (1) changes in the projection from object to image points that accompany eye rotation and accommodation, (2) fixation errors during the experimental measurements, (3) non-uniform retinal stretching, and (4) changes in the perceived direction of a monocular point when presented adjacent to a binocular point. We conducted two experiments that eliminated or compensated for these potential errors. In the first experiment, observers aligned dichoptic test lines using an apparatus and procedure that eliminated all but the third error. In the second experiment, observers judged the alignment of dichoptic afterimages, a technique that eliminates all four potential errors. The results from both experiments show that the retinal coordinates of corresponding points do not change with vergence eye movements. We conclude that corresponding points are in fixed retinal positions for observers with normal retinal correspondence.
The distribution of empirical corresponding points in the two retinas has been well studied along the horizontal and the vertical meridians, but not in other parts of the visual field. Using an apparent-motion paradigm, we measured the positions of those points across the central portion of the visual field. We found that the Hering-Hillebrand deviation (a deviation from the Vieth-Müller circle) and the Helmholtz shear of horizontal disparity (backward slant of the vertical horopter) exist throughout the visual field. We also found no evidence for non-zero vertical disparities in empirical corresponding points. We used the data to find the combination of points in space and binocular eye position that minimizes the disparity between stimulated points on the retinas and the empirical corresponding points. The optimum surface is a top-back slanted surface at a medium to far distance, depending on the observer. The line in the middle of the surface extending away from the observer comes very close to lying in the ground plane as the observer fixates various positions on the ground, consistent with a speculation Helmholtz made that has since been misunderstood.
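As a geometric illustration of what it means to find points in space that stimulate empirical corresponding points, the following is a minimal sketch that back-projects a single hypothetical pair of corresponding horizontal azimuths to the scene point imaged on both retinal locations. It is not the authors' fitting procedure, which additionally searches over binocular eye position and minimizes residual disparity across the whole measured field; the interocular distance and azimuth values are assumed placeholders.

```python
import numpy as np

# Minimal geometric sketch (not the authors' fitting procedure): back-project one
# pair of corresponding horizontal azimuths -- one per eye -- to the point in space
# that stimulates both retinal locations exactly. Repeating this across measured
# pairs traces out an empirical horopter; a full surface/eye-posture fit would then
# minimize residual disparity. All numbers below are hypothetical placeholders.

ipd = 0.062                        # interocular distance in meters (assumed)
eye_L = np.array([-ipd / 2, 0.0])  # eye positions in a top-down (x, z) view
eye_R = np.array([+ipd / 2, 0.0])

def back_project(azimuth_L_deg, azimuth_R_deg):
    """Intersect the two visual lines defined by the azimuths (0 deg = straight ahead,
    positive = rightward) and return the (x, z) scene point, z being distance."""
    aL, aR = np.radians([azimuth_L_deg, azimuth_R_deg])
    dL = np.array([np.sin(aL), np.cos(aL)])  # direction of the left eye's visual line
    dR = np.array([np.sin(aR), np.cos(aR)])  # direction of the right eye's visual line
    # Solve eye_L + t0*dL = eye_R + t1*dR for the intersection parameters t0, t1.
    A = np.column_stack([dL, -dR])
    t = np.linalg.solve(A, eye_R - eye_L)
    return eye_L + t[0] * dL

# Example: a hypothetical corresponding pair whose azimuths differ by 0.5 deg
print("stimulating point (x, z):", back_project(azimuth_L_deg=5.0, azimuth_R_deg=4.5))
```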