We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or naive multi-perspective image stitching, our approach is the first to process ray entries directly, without requiring depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond the diffuse reflections of Lambertian surfaces.
Figure 1: A 2.54-gigaray, 360° panoramic light field (spatial resolution: 17,885×1,260 pixels; angular resolution: 11×11; 7.61 GB) at two different focus settings (top: far; center: near), and close-ups at native resolution (bottom), rendered at 8-43 fps (full aperture to smallest aperture, at a render resolution of 1280×720) using off-the-shelf graphics hardware.

Abstract: We present a caching framework with a novel probability-based prefetching and eviction strategy applied to atomic cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate parameterization scheme for real-time rendering and caching; and light-field-cached volume rendering, which supports interactive exploration of large volumetric datasets using light-field rendering. We consider applications such as light-field photography and the visualization of large image stacks from modern scanning microscopes.
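The abstract does not spell out the caching strategy in code, but a minimal sketch of a probability-driven cache for atomic light-field tiles, assuming illustrative names (ProbabilityCache, load_tile) and a placeholder probability model supplied by the renderer rather than the paper's actual implementation, could look as follows (Python):

# Minimal sketch (not the authors' implementation) of a probability-based
# prefetching and eviction strategy over atomic cache units (tiles): tiles
# predicted as likely to be accessed are prefetched, and the least probable
# resident tiles are evicted once the memory budget is exceeded.

import heapq
from typing import Callable, Dict, Hashable, Iterable

class ProbabilityCache:
    def __init__(self, capacity: int, load_tile: Callable[[Hashable], bytes]):
        self.capacity = capacity          # maximum number of resident tiles
        self.load_tile = load_tile        # backing-store loader (disk, network, ...)
        self.resident: Dict[Hashable, bytes] = {}
        self.probability: Dict[Hashable, float] = {}

    def update_probabilities(self, probs: Dict[Hashable, float]) -> None:
        """Refresh per-tile access probabilities (e.g. from the current view)."""
        self.probability.update(probs)

    def prefetch(self, candidates: Iterable[Hashable], count: int) -> None:
        """Load the `count` most probable candidate tiles that are not resident yet."""
        missing = [t for t in candidates if t not in self.resident]
        best = heapq.nlargest(count, missing, key=lambda t: self.probability.get(t, 0.0))
        for tile in best:
            self.resident[tile] = self.load_tile(tile)
            self._evict_if_needed()

    def get(self, tile: Hashable) -> bytes:
        """Return a tile, loading it on demand on a cache miss."""
        if tile not in self.resident:
            self.resident[tile] = self.load_tile(tile)
            self._evict_if_needed()
        return self.resident[tile]

    def _evict_if_needed(self) -> None:
        """Evict the least probable resident tiles until within capacity."""
        while len(self.resident) > self.capacity:
            victim = min(self.resident, key=lambda t: self.probability.get(t, 0.0))
            del self.resident[victim]

In an interactive renderer one would refresh update_probabilities from the current viewing parameters each frame, run prefetch asynchronously for tiles near the predicted view, and serve remaining misses on demand through get.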
Figure 1: Light-field retargeting allows non-linear scaling while retaining angular consistency without the need to reconstruct depth information. The images show center and off-center perspectives rendered with narrow and wide apertures before and after retargeting, and with linear scaling. Dynamic perspective and focus changes are shown in the supplementary video.

Abstract: We present a first approach to light-field retargeting using z-stack seam carving, which allows light-field compression and extension while retaining angular consistency. Our algorithm first converts an input light field into a set of perspective-sheared focal stacks. It then applies 3D deconvolution to convert the focal stacks into z-stacks, and seam-carves the z-stack of the center perspective. The computed seams of the center perspective are sheared and applied to the z-stacks of all off-center perspectives. Finally, the carved z-stacks are converted back into the perspective images of the output light field. To our knowledge, this is the first approach to light-field retargeting. Unlike existing stereo-pair retargeting or 3D retargeting techniques, it does not require depth information.
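As an illustration of the seam-shearing step described above, the following sketch reapplies a vertical seam found on the center perspective to an off-center view. The depth-proportional disparity model u * alpha * z and the helper names shear_seam and carve_vertical_seam are assumptions for illustration, not the paper's exact formulation (Python):

# Illustrative sketch of shearing a center-perspective seam to an off-center
# perspective and carving it there. The full pipeline (focal-stack conversion,
# 3D deconvolution, seam search) is omitted; only the seam transfer is shown.

import numpy as np

def shear_seam(seam_cols, seam_depths, u, alpha=1.0):
    """Shift each seam pixel horizontally according to its depth and the
    angular offset u of the target perspective (u = 0 is the center view)."""
    return np.round(seam_cols + u * alpha * seam_depths).astype(int)

def carve_vertical_seam(image, seam_cols):
    """Remove one pixel per row at the given columns, shrinking the width by 1."""
    h, w = image.shape[:2]
    cols = np.clip(seam_cols, 0, w - 1)
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), cols] = False
    return image[mask].reshape(h, w - 1, *image.shape[2:])

# Toy example: transfer a seam from the center view to an off-center view.
h, w = 4, 6
center_seam_cols = np.array([2, 2, 3, 3])            # seam found on the center z-stack
center_seam_depth = np.array([0.5, 0.5, 1.0, 1.0])   # depth of each seam pixel
off_center_view = np.arange(h * w, dtype=float).reshape(h, w)
sheared = shear_seam(center_seam_cols, center_seam_depth, u=1.0)
carved = carve_vertical_seam(off_center_view, sheared)
print(carved.shape)  # (4, 5)

Because every off-center view removes a seam at depth-consistent positions, the carved views stay angularly consistent, which is the property the abstract emphasizes.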