1995
DOI: 10.1037/0096-1523.21.3.679
Comparing depth from motion with depth from binocular disparity.

Abstract: The accuracy of depth judgments that are based on binocular disparity or structure from motion (motion parallax and object rotation) was studied in 3 experiments. In Experiment 1, depth judgments were recorded for computer simulations of cones specified by binocular disparity, motion parallax, or stereokinesis. In Experiment 2, judgments were recorded for real cones in a structured environment, with depth information from binocular disparity, motion parallax, or object rotation about the y-axis. In both of the…

Cited by 71 publications (53 citation statements: 10 supporting, 43 mentioning, 0 contrasting; citing publications 1997–2024).
References 89 publications (176 reference statements).

Citation statements (ordered by relevance):
“…At near viewing distances, they could have obtained some information about the object from motion parallax in addition to binocular disparity. However, this is unlikely, since disparity provides more reliable information than does motion parallax from small head movements (Durgin, Proffitt, Olson, & Reinke, 1995). At far viewing distances, the natural viewing condition had no advantages over photographic viewing, which indicates that the information in the frozen optic arrays of the pictures that were used was optimally exploited for the task.…”
Section: Discussion (mentioning)
confidence: 99%
“…We used lateral head motions that were 1.5 times the interocular distance. It is possible that motion parallax would have been more effective if we had used larger lateral head motions, such as those used by Durgin et al. (1995), which were 4 times the interocular distance. However, the difference between motion-parallax-alone and stereopsis-alone conditions in the current experiment was very large and unlikely to be eliminated, even if we had used larger lateral head motions.…”
Section: Discussion (mentioning)
confidence: 99%
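As background for the head-motion comparison in the statement above, a standard small-angle sketch of the two cues' geometry may help. It is not taken from the cited papers; the symbols $I$ (interocular separation), $T$ (lateral head translation), $D$ (viewing distance), and $\Delta d$ (depth interval) are introduced here for illustration:

$$
\delta \;\approx\; \frac{I\,\Delta d}{D^{2}},
\qquad
\theta \;\approx\; \frac{T\,\Delta d}{D^{2}},
$$

where $\delta$ is the relative binocular disparity produced by the depth interval and $\theta$ is the equivalent relative image motion (motion parallax) produced by the head translation. Under this approximation, a head excursion of $T = kI$ yields $k$ times the equivalent disparity, so the $4I$ excursions used by Durgin et al. (1995) would supply roughly $4/1.5 \approx 2.7$ times the parallax signal of the $1.5I$ excursions described above.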
“…Vertical disparity could also be a distance scaling factor when targets have a vertical extent (Mayhew and Longuet-Higgins 1982; Gillam and Lawergren 1983), which these LED stimuli did not. Several other studies (Durgin et al. 1995; McKee and Taylor 2010) have, like ours, used conditions in which the room was lit. Both studies found that depth from disparity was far more veridical in a geometric sense than depth from motion parallax, with the Durgin et al. study showing good distance scaling of the former as observation distance varied.…”
Section: Introduction (mentioning)
confidence: 99%
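The "distance scaling" observation above follows from inverting the same small-angle relation; again this is offered as a standard geometric sketch rather than the cited authors' own derivation, using the symbols defined earlier:

$$
\Delta d \;\approx\; \frac{\delta\,D^{2}}{I},
$$

so a fixed retinal disparity $\delta$ corresponds to a physical depth interval that grows with the square of the viewing distance $D$. Veridical depth from disparity therefore requires an estimate of $D$ (from vergence, vertical disparity, or the structure of a lit room), which is why good distance scaling as observation distance varies is taken as evidence that disparity was being scaled appropriately.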
“…Thus, two views of four noncoplanar points are sufficient for some perceptual tasks like rigidity judgments (subsequently also shown to be possible mathematically by Bennett, Hoffman, Nicola, & Prakash, 1989, and, independently, by Huang & Lee, 1989). Experiments with other phenomena such as the stereokinetic effect, motion parallax, and depth from disparity also suggest that the human visual system is able to derive estimates of depth magnitudes in situations where it seems impossible on the basis of the laws of projective geometry alone (e.g., Caudek & Proffitt, 1993; Durgin, Proffitt, Olson, & Reinke, 1995; Proffitt, Rock, Hecht, & Schubert, 1992).…”
Section: Human Performance Versus Mathematical Constraints (mentioning)
confidence: 99%