Abstract-A review of the facts about human stereo vision leads to the conclusion that the human stereo processing mechanism is very flexible in the presence of other depth cues. Stereopsis seems to provide only local additional depth information, rather than defining the overall 3D geometry of a perceived scene. This paper reports on an experimental approach to adjusting stereo parameters automatically, thereby providing a low-eye-strain, easily accommodated stereo view for computer graphics applications. To this end, the concept of virtual eye separation is defined. Experiment 1 shows that dynamic changes in virtual eye separation are not noticed if they occur over a period of a few seconds. Experiment 2 shows that when subjects are given control over their virtual eye separation, they change it depending on the amount of depth in the scene. Based partly on these results, an algorithm is presented for enhancing stereo depth cues for moving computer generated 3D images. It has the effect of doubling the stereo depth in flat scenes and limiting the stereo depth for deep scenes. It also reduces the occurrence of double images and the discrepancy between focus and vergence. The algorithm is applied dynamically in real time, with an optional damping factor applied so that the disparities never change too abruptly. Finally, Experiment 3 provides a qualitative assessment of the algorithm with a dynamic "flight" over a digital elevation map.
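To make the notion of virtual eye separation concrete, the sketch below shows one conventional way such a parameter can be turned into a stereo pair of viewpoints. This is an illustrative assumption, not code from the paper; the names (cyclopean_eye, eye_separation) are hypothetical.

```python
# Schematic sketch (assumed, not from the paper): a single "virtual eye
# separation" value offsets the cyclopean viewpoint to give left and
# right eye positions for rendering the stereo pair.

def stereo_viewpoints(cyclopean_eye, eye_separation):
    """Offset the cyclopean viewpoint by +/- half the virtual eye
    separation along the observer's horizontal axis."""
    cx, cy, cz = cyclopean_eye
    half = eye_separation / 2.0
    left_eye = (cx - half, cy, cz)
    right_eye = (cx + half, cy, cz)
    return left_eye, right_eye

# Example: the true interocular distance is about 6.4 cm, but a smaller
# "virtual" separation (here 3.0 cm) might be used for a deep scene to
# limit on-screen disparities.
left, right = stereo_viewpoints((0.0, 0.0, 60.0), 3.0)
```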
This paper presents an algorithm for enhancing stereo depth cues for moving computer generated 3D images. The algorithm incorporates the results from an experiment in which observers were allowed to set their preferred eye separation while viewing a set of moving scenes. The data derived from this experiment were used to design an algorithm for the dynamic adjustment of eye separation (or disparity) depending on the scene characteristics. The algorithm has the following steps: 1) Determine the near and far points in the computer graphics scene to be displayed. This is done by sampling the Z buffer. 2) Scale the scene about a point corresponding to the midpoint between the observer's two eyes. The scaling factor is calculated so that the nearest part of the scene lies just behind the monitor. 3) Adjust an eye separation parameter to create stereo depth according to the empirical function derived from the initial study. This has the effect of doubling the stereo depth in flat scenes but limiting the stereo depth for deep scenes. Steps 2 and 3 both have the effect of reducing the discrepancy between focus and vergence for most scenes. The algorithm is applied dynamically in real time, with a damping factor applied so that the disparities never change too abruptly.
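A minimal sketch of this per-frame adjustment loop is given below. It assumes the near and far depths have already been obtained from the Z buffer, and the disparity-limiting function preferred_eye_separation is only a hypothetical stand-in for the empirically derived curve; all helper names and the particular constants are assumptions for illustration.

```python
# Sketch of the dynamic stereo adjustment described above (assumed
# implementation, not the paper's code). Distances are in centimetres.

def preferred_eye_separation(scene_depth, base_separation=6.4):
    """Hypothetical stand-in for the empirical function: roughly double
    the separation for flat scenes, reduce it for deep scenes."""
    flatness = 1.0 / (1.0 + scene_depth / 30.0)  # ~1 for flat, -> 0 for deep
    return base_separation * (0.5 + 1.5 * flatness)

def damp(previous, target, rate=0.1):
    """Move only a fraction of the way toward the target each frame so
    disparities never change too abruptly."""
    return previous + rate * (target - previous)

def adjust_frame(z_near, z_far, screen_distance, prev_separation):
    # Step 1: z_near and z_far would come from sampling the Z buffer;
    #         here they are simply passed in.
    scene_depth = z_far - z_near
    # Step 2: scale the scene about the midpoint between the eyes so the
    #         nearest point lands just behind the monitor plane.
    scale = screen_distance / z_near if z_near > 0 else 1.0
    # Step 3: pick an eye separation from the (assumed) empirical
    #         function, then damp the change from the previous frame.
    target = preferred_eye_separation(scene_depth * scale)
    separation = damp(prev_separation, target)
    return scale, separation

# Example frame: near point at 40 cm, far point at 200 cm, monitor at
# 60 cm, previous virtual eye separation of 6.4 cm.
scale, separation = adjust_frame(40.0, 200.0, 60.0, 6.4)
```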