We present a novel photographic technique called dual photography, which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location. The technique is completely image-based, requiring no knowledge of scene geometry or surface properties, and by its nature automatically includes all transport paths, including shadows, inter-reflections and caustics. In its simplest form, the technique can be used to take photographs without a camera; we demonstrate this by capturing a photograph using a projector and a photo-resistor. If the photo-resistor is replaced by a camera, we can produce a 4D dataset that allows for relighting with 2D incident illumination. Using an array of cameras we can produce a 6D slice of the 8D reflectance field that allows for relighting with arbitrary light fields. Since an array of cameras can operate in parallel without interference, whereas an array of light sources cannot, dual photography is fundamentally a more efficient way to capture such a 6D dataset than a system based on multiple projectors and one camera. As an example, we show how dual photography can be used to capture and relight scenes.
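The relationship at the heart of dual photography can be written with the light transport matrix: if a projector pattern p produces the camera image c = T p, Helmholtz reciprocity implies that the transposed matrix T^T transports light the other way, giving the image seen from the projector's position. Below is a minimal numpy sketch of this idea on a toy transport matrix; the pixel counts and the random T are stand-ins for a measured matrix.

```python
import numpy as np

# Toy sizes: a 16-pixel projector and a 9-pixel camera (illustrative only).
P, C = 16, 9

# T maps projector pixels to camera pixels: c = T @ p.
# In practice T is measured column-by-column or with multiplexed patterns;
# here we fill it with random nonnegative entries as a stand-in.
rng = np.random.default_rng(0)
T = rng.random((C, P))

# Primal photograph: illuminate with pattern p, record camera image c.
p = rng.random(P)
c = T @ p

# Dual photograph: by reciprocity, T.T maps a virtual "projector" at the
# camera's location to a virtual "camera" at the projector's location.
p_virtual = rng.random(C)      # virtual illumination from the camera's viewpoint
c_dual = T.T @ p_virtual       # image as seen from the projector's viewpoint

print(c.shape, c_dual.shape)   # (9,) (16,)
```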
Figure 1: The techniques in this paper employ two computer-assisted optical effects: synthetic aperture photography and synthetic aperture illumination. On the left, we aim a camera at an array of planar mirrors, yielding 22 different views of a statuette partially obscured by a plant. By rectifying, shifting, and adding these views together, we simulate a camera with a wide aperture and a shallow depth of field. Using appropriate shifts, we can position the focal plane of this synthetic camera astride the statuette, blurring out the plant. On the right, we replace the camera with a video projector. By shifting, keystoning, and projecting multiple copies of a binary pattern, we produce a real image with a similarly shallow depth of field. Using appropriate shifts, we can position this image astride the statuette. On this plane the image is well focused; elsewhere, it is blurry.

Abstract: Confocal microscopy is a family of imaging techniques that employ focused patterned illumination and synchronized imaging to create cross-sectional views of 3D biological specimens. In this paper, we adapt confocal imaging to large-scale scenes by replacing the optical apertures used in microscopy with arrays of real or virtual video projectors and cameras. Our prototype implementation uses a video projector, a camera, and an array of mirrors. Using this implementation, we explore confocal imaging of partially occluded environments, such as foliage, and weakly scattering environments, such as murky water. We demonstrate the ability to selectively image any plane in a partially occluded environment, and to see further through murky water than is otherwise possible. By thresholding the confocal images, we extract mattes that can be used to selectively illuminate any plane in the scene.
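The shift-and-add step described in the caption is easy to sketch. Below is a minimal numpy version, assuming the mirror views have already been rectified to a common reference plane and that integer-pixel shifts suffice; a real implementation would warp each view with a homography and interpolate at sub-pixel precision. The function name and the per-view offset representation are illustrative, not from the paper.

```python
import numpy as np

def synthetic_aperture_focus(views, offsets, alpha):
    """Shift-and-add refocusing over pre-rectified views.

    views   : list of HxW float arrays (rectified camera/mirror views)
    offsets : list of (dy, dx) baseline offsets per view, in pixels
    alpha   : scales the offsets to select the synthetic focal plane
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dy, dx) in zip(views, offsets):
        # Shift each view proportionally to its baseline offset; points on
        # the chosen plane align across views, everything else smears out.
        shifted = np.roll(img, (round(alpha * dy), round(alpha * dx)),
                          axis=(0, 1))
        acc += shifted
    return acc / len(views)
```

Sweeping alpha moves the synthetic focal plane through the scene; the 22 mirror views in Figure 1 act as the samples of the wide synthetic aperture.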
In this paper, we introduce a novel system for browsing, enhancing, and manipulating casual outdoor photographs by combining them with already existing georeferenced digital terrain and urban models. A simple interactive registration process is used to align a photograph with such a model. Once the photograph and the model have been registered, an abundance of information, such as depth, texture, and GIS data, becomes immediately available to our system. This information, in turn, enables a variety of operations, ranging from dehazing and relighting the photograph, to novel view synthesis, and overlaying with geographic information. We describe the implementation of a number of these applications and discuss possible extensions. Our results show that augmenting photographs with already available 3D models of the world supports a wide variety of new ways for us to experience and interact with our everyday snapshots.
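As one concrete example, once per-pixel depth is available from the registered model, dehazing can be posed as inverting the standard single-scattering haze model I = J·t + A·(1 − t) with transmission t = exp(−β·d). The sketch below follows that textbook model, not the paper's exact pipeline, and the parameter values are placeholders.

```python
import numpy as np

def dehaze_with_depth(image, depth, airlight, beta=0.001, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t), with transmission
    t = exp(-beta * depth) computed from the registered 3D model.

    image    : HxWx3 float array in [0, 1]
    depth    : HxW per-pixel distance (meters) from the aligned model
    airlight : length-3 RGB airlight color A (e.g., estimated from the sky)
    beta     : scattering coefficient (scene-dependent; this value is a guess)
    """
    airlight = np.asarray(airlight, dtype=float)
    t = np.exp(-beta * depth)[..., None]
    t = np.clip(t, t_min, 1.0)          # avoid amplifying noise at large depths
    J = (image - airlight) / t + airlight
    return np.clip(J, 0.0, 1.0)
```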
While onboard navigation systems are gaining in importance, maps are still the medium of choice for laying out a route to a destination and for wayfinding. However, even with a map, one is almost always more comfortable navigating a route the second time, thanks to visual memory of the route. To make the first time navigating a route feel more familiar, we present a system that integrates a map with a video automatically constructed from panoramic imagery captured at close intervals along the route. The routing information is used to create a variable-speed video depicting the route. During playback of the video, the frame and field of view are dynamically modulated to highlight salient features along the route and connect them back to the map. A user interface is demonstrated that allows exploration of the combined map, video, and textual driving directions. We discuss the construction of the hybrid map and video interface. Finally, we report the results of a study that provides evidence of the effectiveness of such a system for route following.
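As an illustration of the variable-speed idea, one simple policy slows playback near upcoming maneuvers and speeds it up on long straight segments. The sketch below is hypothetical (the abstract does not specify the rule); the function name and thresholds are invented for illustration.

```python
def playback_speed(dist_to_next_turn, v_max=8.0, v_min=1.0, slow_radius=150.0):
    """Hypothetical speed profile: cruise fast on straightaways, slow at turns.

    dist_to_next_turn : meters along the route to the next maneuver
    Returns a playback multiplier (captured frames per output frame).
    """
    # Linearly ramp from v_min at the turn up to v_max at slow_radius and beyond.
    frac = min(dist_to_next_turn, slow_radius) / slow_radius
    return v_min + (v_max - v_min) * frac
```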