1979
DOI: 10.1038/scientificamerican0779-136
The Visual Perception of Motion in Depth

Cited by 141 publications (69 citation statements); references 0 publications.
“…In Experiment 3, where depth order should have been unambiguously provided by motion perspective, the addition of changing-size cues further improved vection. These findings sit well with the idea that changing-size and stereoscopic motion channels converge at the same motion-in-depth stage of the visual system (Regan & Beverley, 1979; Regan et al., 1979a, 1979b). …”
Section: Discussion (supporting)
confidence: 86%
“…Regan and his colleagues have argued that changing-size and stereoscopic motion stimuli generate signals that converge at the same "motion-in-depth stage" of the visual system (Regan & Beverley, 1979; Regan, Beverley, & Cynader, 1979a, 1979b). They showed that if a stimulus's changing-size and changing-disparity cues indicated opposite directions of motion in depth, it was possible to completely cancel the impression of motion in depth.…”
Section: Methods (mentioning)
confidence: 99%
“…Regan, Beverley and Cynader (1979) have studied visual guidance of locomotion and have confirmed Gibson's report (Gibson, 1950; Gibson et al., 1955) that the center of expansion provides information about the direction of locomotion. Llewellyn (1971) and Gregory (1976), on the other hand, have reported that subjects who are instructed to do so cannot accurately locate the center of expansion in a random-dot display.…”
Section: Expansion Point in Landing (supporting)
confidence: 56%
“…The purpose of this proposed research is to examine the differences in the two situations to determine whether it is possible to use the center of expansion to locate the landing spot. While not specifically a "depth cue," in that it does not directly convey information about absolute or relative depth, the center of expansion has been studied as a referent for visual guidance of locomotion in three dimensions (Gibson, 1950; Gibson et al., 1955; Regan et al., 1979). Moreover, responses from pilots indicate that they feel it is a useful cue to a safe approach and landing.…”
Section: Expansion Point in Landing (mentioning)
confidence: 99%
“…The second hypothesis we proposed to account for cases in which conjunction search is fast or even independent of display size was that, for certain pairs of dimensions, there might after all exist a number of specialized detectors coding conjunctions of values as integral perceptual units. Likely candidates are those pairings that signal important variables in the real three-dimensional world, for example "looming" (pairs of diverging parallel edges) or "shape from shading" (luminance or texture gradients created by changing illumination on solid objects). Regan, Beverley, and Cynader (1979) have in fact found single units that appear selectively to code […]. Can we predict from the physiological evidence which pairs of dimensions are most likely to be coded as conjunctions? Single units in many visual areas (V1, V2, V4) do appear to be tuned to different combinations of particular orientations with particular spatial frequencies or particular directions of motion (Desimone et al., 1985; DeValois, Albrecht, & Thorell, 1982).…”
Section: The Conjunction Detector Hypothesis (mentioning)
confidence: 99%