Realistic modeling of reverberant sound in 3D virtual worlds provides users with important cues for localizing sound sources and understanding spatial properties of the environment. Unfortunately, current geometric acoustic modeling systems do not accurately simulate reverberant sound. Instead, they model only direct transmission and specular reflection, while diffraction is either ignored or modeled through statistical approximation. However, diffraction is important for correct interpretation of acoustic environments, especially when the direct path between sound source and receiver is occluded. The Uniform Theory of Diffraction (UTD) extends geometrical acoustics with diffraction phenomena: illuminated edges become secondary sources of diffracted rays that in turn may propagate through the environment. In this paper, we propose an efficient method for computing the acoustical effect of diffraction paths, using the UTD to derive secondary diffracted rays and their associated diffraction coefficients. Our main contributions are: 1) a beam tracing method for enumerating sequences of diffracting edges efficiently and without aliasing in densely occluded polyhedral environments; 2) a practical approximation to the simulated sound field in which diffraction is considered only in shadow regions; and 3) a real-time auralization system demonstrating that diffraction dramatically improves the quality of spatialized sound in virtual environments.
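To make the "diffraction only in shadow regions" approximation concrete, here is a minimal geometric sketch in Python. It is not the paper's implementation; the function names, inputs, and the specific test are our illustrative assumptions. The idea sketched is that the shadow boundary is the plane containing the diffracting edge and the source ray that grazes it, and a receiver is treated as shadowed when it falls on the opposite side of that plane from the illuminated wedge face.

```python
import numpy as np

def shadow_boundary_normal(edge_origin, edge_dir, source):
    """Normal of the shadow-boundary plane: the plane containing the
    diffracting edge and the ray from the source that grazes it.
    (Illustrative helper; not from the paper.)"""
    incident = edge_origin - source
    n = np.cross(edge_dir, incident)
    return n / np.linalg.norm(n)

def receiver_in_shadow(edge_origin, edge_dir, source, receiver, lit_face_point):
    """Treat the receiver as shadowed when it lies on the opposite side
    of the shadow-boundary plane from the illuminated wedge face; only
    then would the approximation spawn a UTD diffraction path.
    This ignores the wedge's reflection boundary for brevity."""
    n = shadow_boundary_normal(edge_origin, edge_dir, source)
    side_receiver = float(np.dot(n, receiver - edge_origin))
    side_lit = float(np.dot(n, lit_face_point - edge_origin))
    return side_receiver * side_lit < 0.0
```

In a full system, a test of this kind would gate the evaluation of the UTD diffraction coefficient for each edge enumerated by the beam tracer, so that diffraction contributions are only computed where the direct sound is occluded.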
[Figure 1 (panels: Photograph, Rendering, Original Model, Appearance Change): Photograph compared to a face rendered using our skin reflectance model. The rendered image was composited on top of the photograph. Right: changing the albedo and BRDF using statistics of measured model parameters from a sample population.]

Abstract: We have measured 3D face geometry, skin reflectance, and subsurface scattering using custom-built devices for 149 subjects of varying age, gender, and race. We developed a novel skin reflectance model whose parameters can be estimated from measurements. The model decomposes the large amount of measured skin data into a spatially-varying analytic BRDF, a diffuse albedo map, and diffuse subsurface scattering. Our model is intuitive, physically plausible, and, since we do not use the original measured data, easy to edit as well. High-quality renderings come close to reproducing real photographs. The analysis of the model parameters for our sample population reveals variations according to subject age, gender, skin type, and external factors (e.g., sweat, cold, or makeup). Using our statistics, a user can edit the overall appearance of a face (e.g., changing skin type and age) or change small-scale features using texture synthesis (e.g., adding moles and freckles). We are making the collected statistics publicly available to the research community for applications in face synthesis and analysis.
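The structure of the decomposition can be sketched as a shading function. This is a loose stand-in, not the paper's model: a Blinn-Phong-style lobe substitutes for the fitted analytic BRDF, the subsurface scattering component is omitted, and all parameter names are illustrative.

```python
import numpy as np

def skin_reflectance(albedo, spec_scale, shininess, n, l, v):
    """Minimal sketch of the decomposition: reflected radiance is a
    diffuse term driven by the per-texel albedo map plus a spatially
    varying specular lobe. A Blinn-Phong-style lobe stands in for the
    paper's fitted BRDF, and the diffuse subsurface scattering term
    (which blurs diffuse light across the skin) is omitted.
    All direction vectors are assumed unit length."""
    h = (l + v) / np.linalg.norm(l + v)                  # half vector
    diffuse = albedo * max(float(np.dot(n, l)), 0.0) / np.pi
    specular = spec_scale * max(float(np.dot(n, h)), 0.0) ** shininess
    return diffuse + specular
```

Under this view, editing a face amounts to shifting the per-texel albedo and specular parameters toward the measured statistics of a target group, which is what makes the decomposition more convenient to manipulate than raw measured data.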
A difficult challenge in geometrical acoustic modeling is computing propagation paths from sound sources to receivers fast enough for interactive applications. This paper describes a beam tracing method that enables interactive updates of propagation paths from a stationary source to a moving receiver in large building interiors. During a precomputation phase, convex polyhedral beams traced from the location of each sound source are stored in a "beam tree" representing the regions of space reachable by potential sequences of transmissions, diffractions, and specular reflections at surfaces of a 3D polygonal model. Then, during an interactive phase, the precomputed beam tree(s) are used to generate propagation paths from the source(s) to any receiver location at interactive rates. The key features of this beam tracing method are (1) it scales to support large building environments, (2) it models propagation due to edge diffraction, (3) it finds all propagation paths up to a given termination criterion without exhaustive search or risk of under-sampling, and (4) it updates propagation paths at interactive rates. The method has been demonstrated to work effectively in interactive acoustic design and virtual walkthrough applications.
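The split between the precomputation and interactive phases can be illustrated with a small data-structure sketch. The structure and field names below are our assumptions for illustration, not the paper's code: each node of the "beam tree" records one propagation event and a convex beam region, and receiver movement only requires point-in-beam tests plus back-tracking to the root.

```python
from dataclasses import dataclass, field

@dataclass
class BeamNode:
    """One node of the precomputed beam tree: the region of space
    reachable through a particular sequence of transmissions,
    diffractions, and specular reflections (illustrative structure)."""
    event: str                      # 'source', 'reflect', 'transmit', 'diffract'
    surface_id: int | None          # polygon or edge generating the event
    contains: callable              # point-in-beam test for the convex beam
    children: list = field(default_factory=list)
    parent: "BeamNode | None" = None

def propagation_paths(root, receiver):
    """Interactive phase: visit every node, and for each node whose beam
    contains the receiver, back-track to the root to recover the event
    sequence of one propagation path."""
    paths, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.contains(receiver):
            seq, cur = [], node
            while cur is not None:
                seq.append((cur.event, cur.surface_id))
                cur = cur.parent
            paths.append(list(reversed(seq)))
        stack.extend(node.children)   # child beams cover different regions
    return paths
```

Because each beam is a convex polyhedron, `contains` reduces to a handful of plane-side tests; even this naive traversal shows why moving the receiver is cheap, since all expensive beam tracing happened during precomputation.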