We present a fully automatic framework that digitizes a complete 3D head with hair from a single unconstrained image. Our system offers a practical and consumer-friendly end-to-end solution for avatar personalization in gaming and social VR applications. The reconstructed models include secondary components (eyes, teeth, tongue, and gums) and provide animation-friendly blendshapes and joint-based rigs. While the generated face is a high-quality textured mesh, we propose a versatile and efficient polygonal strips (polystrips) representation for the hair. Polystrips are suitable for an extremely wide range of hairstyles and textures and are compatible with existing game engines for real-time rendering. In addition to integrating state-of-the-art advances in facial shape modeling and appearance inference, we propose a novel single-view hair generation pipeline, based on 3D-model and texture retrieval, shape refinement, and polystrip patching optimization. The performance of our hairstyle retrieval is enhanced using a deep convolutional neural network for semantic hair attribute classification. Our generated models are visually comparable to state-of-the-art game characters designed by professional artists. For real-time settings, we demonstrate the flexibility of polystrips in handling hairstyle variations, as opposed to conventional strand-based representations. We further show the effectiveness of our approach on a large number of images taken in the wild, and how compelling avatars can be easily created by anyone.
We present two contributions to the area of volumetric rendering. We develop a novel, comprehensive theory of volumetric radiance estimation that leads to several new insights and includes all previously published estimates as special cases. This theory allows for estimating in-scattered radiance at a point, or accumulated radiance along a camera ray, with the standard photon particle representation used in previous work. Furthermore, we generalize these operations to include a more compact and more expressive intermediate representation of lighting in participating media, which we call "photon beams." The combination of these representations and their respective query operations results in a collection of nine distinct volumetric radiance estimates.

Our second contribution is a more efficient rendering method for participating media based on photon beams. Even when shooting and storing fewer photons and using less computation time, our method significantly reduces both bias (blur) and variance in volumetric radiance estimation. This enables us to render sharp lighting details (e.g., volume caustics) using just tens of thousands of photon beams, instead of the millions to billions of photon points required by previous methods.
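The core idea of a beam-based estimate can be illustrated with a minimal sketch. This is not the paper's exact estimator: it assumes a homogeneous medium, an isotropic phase function, a constant 1D blur kernel of assumed width `r`, and assumed coefficients `sigma_s`/`sigma_t`. It shows how a camera ray can gather radiance from each photon beam it passes near, rather than from a 3D density of photon points.

```python
import numpy as np

def beam_ray_radiance(ray_o, ray_d, beams, sigma_s=0.5, sigma_t=1.0, r=0.1):
    """Sketch of in-scattered radiance along a camera ray from photon beams.

    Each beam is a tuple (origin, unit direction, power, length). A beam
    contributes where the perpendicular distance between the two lines is
    below the kernel width r; the 1/sin(theta) term accounts for the
    geometry of the ray-beam crossing.
    """
    L = 0.0
    for b_o, b_d, power, b_len in beams:
        sin_theta = np.linalg.norm(np.cross(ray_d, b_d))
        if sin_theta < 1e-9:               # nearly parallel: skip for simplicity
            continue
        # Closest points between the camera ray and the beam (lines in 3D).
        w0 = ray_o - b_o
        a = np.dot(ray_d, ray_d); b = np.dot(ray_d, b_d); c = np.dot(b_d, b_d)
        d = np.dot(ray_d, w0);    e = np.dot(b_d, w0)
        denom = a * c - b * b
        t_cam = (b * e - c * d) / denom    # parameter along the camera ray
        t_beam = (a * e - b * d) / denom   # parameter along the beam
        if t_cam < 0 or not (0.0 <= t_beam <= b_len):
            continue
        dist = np.linalg.norm((ray_o + t_cam * ray_d) - (b_o + t_beam * b_d))
        if dist > r:
            continue
        # Transmittance to the crossing along both the beam and the camera ray,
        # scattering with an isotropic phase function 1/(4*pi), constant kernel 1/(2r).
        Tr = np.exp(-sigma_t * (t_cam + t_beam))
        L += power * sigma_s * Tr / (4.0 * np.pi) / (2.0 * r * sin_theta)
    return L
```

A single beam crossing the camera ray within the kernel width yields a nonzero estimate; a beam passing farther than `r` contributes nothing, which is the source of the method's compactness relative to storing many individual photon points.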
This article introduces a practical shading model for cloth that can simulate both anisotropic highlights and the complex color shifts seen in cloth made of different colored threads. Our model is based on extensive Bidirectional Reflectance Distribution Function (BRDF) measurements of several cloth samples. We have also measured the scattering profile of several different individual cloth threads. Based on these measurements, we derived an empirical shading model capable of predicting the light scattering profile of a variety of threads. From individual threads, we synthesized a woven cloth model, which provides an intuitive description of the layout of the constituent threads as well as their tangent directions. Our model is physically plausible, accounting for shadowing and masking by the threads. We validate our model by comparing predicted and measured light scattering values and show how it can reproduce the appearance of many cloth and thread types, including silk, velvet, linen, and polyester. The model is robust, easy to use, and can simulate the appearance of complex highlights and color shifts that cannot be fully handled by existing models.
In this article, we derive a physically-based model for simulating rainbows. Previous techniques for simulating rainbows have used either geometric optics (ray tracing) or Lorenz-Mie theory. Lorenz-Mie theory is by far the most accurate technique, as it takes into account optical effects such as dispersion, polarization, interference, and diffraction. These effects are critical for simulating rainbows accurately. However, as Lorenz-Mie theory is restricted to scattering by spherical particles, it cannot be applied to real raindrops, which are nonspherical, especially for larger raindrops. We present the first comprehensive technique for simulating the interaction of a wavefront of light with a physically-based water drop shape. Our technique is based on ray tracing extended to account for dispersion, polarization, interference, and diffraction. Our model matches Lorenz-Mie theory for spherical particles, but it also enables the accurate simulation of nonspherical particles. It can simulate many different rainbow phenomena, including double rainbows and supernumerary bows. We show how the nonspherical raindrops influence the shape of the rainbows, and we provide a simulation of the rare twinned rainbow, which is believed to be caused by nonspherical water drops.

This research has been partially funded by NSF Project GreenLight (award no. 0821155), a Marie Curie grant from the Seventh Framework Programme (grant agreement no. 251415), the Spanish Ministry of Science and Technology (TIN2010-21543), and the Gobierno de Aragón (projects OTRI 2009/0411 and CTPP05/09). Authors' addresses: I. Sadeghi (corresponding author), University of California, San Diego; email: iman@graphics.ucsd.edu; A. Munoz, Universidad de Zaragoza; P. Laven, Horley, UK; W. Jarosz, Disney Research Zürich and University of California, San Diego; F. Seron and D. Gutierrez, Universidad de Zaragoza; H. W. Jensen, University of California, San Diego.
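The geometric-optics baseline that this work extends can be sketched with standard rainbow physics (not the paper's method, which adds interference, diffraction, polarization, and nonspherical drops): for a spherical drop, the primary bow sits at the minimum-deviation angle of a ray undergoing one internal reflection, and a wavelength-dependent refractive index reproduces dispersion. The index values used below are assumed approximate values for water.

```python
import math

def primary_rainbow_angle(n):
    """Primary-bow angle (degrees from the antisolar point) for a spherical
    water drop of refractive index n, under pure geometric optics.

    The minimum-deviation incidence angle for one internal reflection
    satisfies cos^2(i) = (n^2 - 1) / 3.
    """
    i = math.acos(math.sqrt((n * n - 1.0) / 3.0))
    r = math.asin(math.sin(i) / n)         # Snell's law at the drop surface
    D = math.pi + 2.0 * i - 4.0 * r        # total deviation, one internal bounce
    return math.degrees(math.pi - D)       # angle seen from the antisolar point
```

With approximate indices n ≈ 1.331 for red and n ≈ 1.343 for violet, the sketch places the bow near 42° and puts red on the outside of the arc, i.e., dispersion alone already separates the colors; supernumerary bows, by contrast, require the interference effects the article models.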
No abstract