We present an image-based technique to accelerate the navigation in complex static environments. We perform an image-space simplification of each sample of the scene taken at a particular viewpoint and dynamically combine these simplified samples to produce images for arbitrary viewpoints. Since the scene is converted into a bounded-complexity representation in image space, with the base images rendered beforehand, the rendering speed is relatively insensitive to the complexity of the scene. The proposed method correctly simulates the kinetic depth effect (parallax), handles occlusion, and can resolve missing visibility information. This paper describes a suitable representation for the samples, a specific technique for simplifying them, and different morphing methods for combining the sample information to reconstruct the scene. We use hardware texture mapping to implement the image-space warping and hardware affine transformations to compute the viewpoint-dependent warping function.
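To make the image-space warping concrete, the following is a minimal, hypothetical sketch of the kind of viewpoint-dependent warp the abstract alludes to: a 3x3 projective transform applied to the corners of a textured quad, which is what hardware texture mapping effectively evaluates per primitive. The function name, the identity matrix, and the quad coordinates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 projective warp H to an Nx2 array of image points.

    Stands in for the per-primitive warp that hardware texture mapping
    would perform on a simplified image-space sample.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # perspective divide

# Sanity check: the identity warp (viewpoint unchanged) leaves the quad fixed.
quad = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
warped = warp_points(np.eye(3), quad)
```

In the actual system such a matrix would be derived from the reference and target viewpoints and applied in hardware; this sketch only shows the mathematical form of the warp.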