Abstract: We present an application of interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings. Images of a physical model, captured by a camera above the scene, are processed to construct a corresponding virtual 3D model. To achieve interactive rendering rates, we use a hybrid rendering technique: radiosity simulates the inter-reflectance between diffuse patches, while shadow volumes generate per-pixel direct illumination. The rendered images are then projected onto the physical model by four calibrated projectors so that users can study the daylighting directly on the model. The resulting virtual heliodon is a physical design environment in which multiple designers, a designer and a client, or a teacher and students can gather to experience animated visualizations of the natural illumination within a proposed design by controlling the time of day, season, and climate. Participants may also interactively redesign the geometry and materials of the space by manipulating physical design elements and immediately see the updated lighting simulation.
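To make the hybrid renderer concrete, here is a minimal sketch of the indirect (diffuse inter-reflectance) step as a Jacobi-style iteration of the classic radiosity equation B = E + ρ(F·B); the `emission`, `reflectance`, and `form_factors` arrays are hypothetical inputs, not the paper's actual data structures, and the per-pixel direct term from shadow volumes would be composited separately at display time.

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi iteration of the radiosity equation B = E + rho * (F @ B).

    emission     -- per-patch emitted radiosity, shape (n,)
    reflectance  -- per-patch diffuse reflectance, shape (n,)
    form_factors -- patch-to-patch form-factor matrix, shape (n, n)
    """
    B = emission.copy()
    for _ in range(iterations):
        # Each patch gathers radiosity from all others, scaled by its albedo.
        B = emission + reflectance * (form_factors @ B)
    return B

# At display time, each pixel would combine interpolated indirect patch
# radiosity with direct light gated by shadow-volume visibility:
#   pixel = direct_light * visibility + indirect(B)
```

The iteration converges for physically valid scenes because the combined reflectance keeps the spectral radius of ρF below one.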
When projectors display images on complex, non-planar surface geometry, indirect illumination between the surfaces disrupts the final appearance of the imagery, generally increasing brightness, decreasing contrast, and washing out colors. In this paper we use global illumination simulation to predict this unintended indirect component and solve for the compensated projection imagery that minimizes the difference between the desired imagery and the actual total illumination in the resulting physical scene. Our method uses quadratic programming to minimize this error within the constraints of the physical system, namely that negative light is physically impossible. We demonstrate our compensation optimization both in computer simulation and in physical validation within a table-top spatially augmented reality system, and we present an application of these results to the visualization of interior architectural illumination. To support interactive modification of the scene geometry and desired appearance, the system is accelerated with a CUDA implementation of the QP optimization method.
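The optimization amounts to a bound-constrained least-squares problem: minimize ||Tp − d||² subject to 0 ≤ p ≤ p_max, where the lower bound encodes that negative light is impossible. Below is a minimal sketch assuming a precomputed light-transport matrix `T` that maps projector intensities to total scene illumination (direct plus simulated indirect); a simple projected-gradient loop stands in for the paper's CUDA-accelerated QP solver, and the names `T`, `desired`, and `p_max` are illustrative.

```python
import numpy as np

def compensate(T, desired, p_max=1.0, iters=500):
    """Projected-gradient solve of  min ||T p - d||^2  s.t. 0 <= p <= p_max.

    T       -- light-transport matrix: projector intensities -> total
               scene illumination (direct + simulated indirect)
    desired -- target illumination d at each scene sample point
    """
    # Warm start from the unconstrained least-squares solution, clipped
    # into the feasible box (no negative light, no saturation).
    p = np.clip(np.linalg.lstsq(T, desired, rcond=None)[0], 0.0, p_max)
    step = 1.0 / (np.linalg.norm(T, 2) ** 2)   # safe step via spectral norm
    for _ in range(iters):
        grad = T.T @ (T @ p - desired)          # gradient direction
        p = np.clip(p - step * grad, 0.0, p_max)
    return p
```

Projection onto the box after every gradient step is what enforces the "no negative light" constraint that an unconstrained inverse solution would violate.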
We present a novel approach to 3D object detection in scenes scanned by LiDAR sensors, based on a probabilistic representation of free, occupied, and hidden space that extends the concept of occupancy grids from robot mapping algorithms. This scene representation naturally handles LiDAR sampling issues, can be used to fuse multiple LiDAR data sets, and captures the inherent uncertainty of the data due to occlusions and clutter. Using this model, we formulate a hypothesis testing methodology to determine the probability that given 3D objects are present in the scene. By propagating uncertainty from the original sample points, we can measure confidence in the detection results in a principled way. We demonstrate the approach on examples of detecting objects that are partially occluded by scene clutter such as camouflage netting.
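As an illustration of the free/occupied/hidden bookkeeping such a representation implies, here is a toy sketch using independent voxels updated in log-odds form. The function names, the log-odds increments, and the independence assumption in the hypothesis test are illustrative simplifications of the paper's principled uncertainty propagation, not its actual formulation.

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85   # assumed log-odds increments per observation

def update_ray(grid, traversed, hit):
    """Log-odds update for one LiDAR return: voxels the ray passes
    through gain 'free' evidence, the hit voxel gains 'occupied'
    evidence, and voxels beyond the hit stay hidden (untouched)."""
    for v in traversed:
        grid[v] += L_FREE
    if hit is not None:
        grid[hit] += L_OCC

def object_probability(grid, object_voxels):
    """Toy hypothesis test: probability that every voxel of a candidate
    object placement is occupied, treating voxels as independent."""
    idx = tuple(np.asarray(object_voxels).T)
    p_occ = 1.0 / (1.0 + np.exp(-grid[idx]))   # log-odds -> probability
    return float(np.prod(p_occ))
```

Voxels the sensor never observes keep their prior, which is exactly how the "hidden" state lets partially occluded objects retain nonzero detection probability.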
Figure 1. Our immersive and dynamic projection environment enables users to control visualization by manipulating the position and orientation of the projection surfaces. We present a variety of interactive demonstration applications leveraging our new interface.

Abstract: We present a system for dynamic projection on large, human-scale, moving projection screens and demonstrate it for immersive visualization applications in several fields. We have designed and implemented efficient, low-cost methods for robustly tracking the projection surfaces, and a method for providing high frame rate output from computationally intensive, low frame rate applications. We present a distributed rendering environment that allows many projectors to work together to illuminate the projection surfaces. This physically immersive visualization environment promotes innovation and creativity in design and analysis applications and facilitates exploration of alternative visualization styles and modes. The system allows multiple participants to interact in a shared environment in a natural manner. Our human-scale user interface is intuitive: novice users require essentially no instruction to operate the visualization.
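One common way to provide high frame rate output from a low frame rate application is to decouple the two rates: keep the most recent application frame and the most recent tracked surface pose in shared state, and re-warp at every projector refresh. The sketch below shows that structure under stated assumptions; the class and method names are hypothetical and do not reflect the paper's API.

```python
import threading

class FrameDecoupler:
    """Serve projector output at display rate while the application
    renders at its own (lower) rate: always warp the newest application
    frame with the newest tracked surface pose."""

    def __init__(self):
        self.lock = threading.Lock()
        self.latest_frame = None   # last frame from the slow renderer
        self.latest_pose = None    # last pose from the fast tracker

    def submit_frame(self, frame):   # called by the application thread
        with self.lock:
            self.latest_frame = frame

    def submit_pose(self, pose):     # called by the tracking thread
        with self.lock:
            self.latest_pose = pose

    def compose(self, warp):         # called once per projector vsync
        with self.lock:
            frame, pose = self.latest_frame, self.latest_pose
        return None if frame is None else warp(frame, pose)
```

Because the warp uses the freshest pose rather than the pose at render time, projected imagery stays registered to a moving screen even when the application itself updates slowly.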