This is a repository copy of Real-time Facial Animation with Image-based Dynamic Avatars.
Interesting textures form on the surfaces of objects as a result of external chemical, mechanical, and biological agents. Simulating these textures is necessary for generating models for realistic image synthesis. The textures formed are progressively variant, with the variations depending on the global and local geometric context. We present a method for capturing progressively varying textures along with the relevant context parameters that control them. By relating textures and context parameters, we are able to transfer the textures to novel synthetic objects. We present examples of capturing chemical effects, such as rusting; mechanical effects, such as paint cracking; and biological effects, such as the growth of mold on a surface. We demonstrate a user interface for specifying where an object is exposed to external agents, and we show complex, geometry-dependent textures evolving on synthetic objects.
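To make the transfer step concrete, the following is a minimal, hypothetical Python sketch (not the authors' code): captured texture patches are indexed by their context-parameter vectors, and each point on a new synthetic object receives the patch whose context vector is closest. The particular context parameters and the nearest-neighbour matching are illustrative assumptions.

import numpy as np

def transfer_textures(captured_context, captured_patches, target_context):
    """For each target surface point, pick the captured texture patch whose
    context parameters are closest (nearest neighbour in context space)."""
    # captured_context: (N, d) context vectors for the captured samples
    # captured_patches: (N, h, w, 3) texture patches associated with each sample
    # target_context:   (M, d) context vectors evaluated on the synthetic object
    d2 = ((target_context[:, None, :] - captured_context[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)        # index of the best-matching sample per point
    return captured_patches[nearest]   # (M, h, w, 3) transferred patches

# Example: 200 captured samples with 3 context parameters, 16x16 RGB patches
ctx = np.random.rand(200, 3)
patches = np.random.rand(200, 16, 16, 3)
target = np.random.rand(500, 3)
print(transfer_textures(ctx, patches, target).shape)  # (500, 16, 16, 3)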
We propose a novel framework that automatically learns the lighting patterns for efficient reflectance acquisition, as well as how to faithfully reconstruct spatially varying anisotropic BRDFs and local frames from measurements under such patterns. The core of our framework is an asymmetric deep autoencoder, consisting of a nonnegative, linear encoder that directly corresponds to the lighting patterns used in physical acquisition, and a stacked, nonlinear decoder that computationally recovers the BRDF information from the captured photographs. The autoencoder is trained on a large amount of synthetic reflectance data and can adapt to various factors, including the geometry of the setup and the properties of the appearance being acquired. We demonstrate the effectiveness of our framework on a wide range of physical materials, using as few as 16 to 32 lighting patterns, which correspond to 12 to 25 seconds of acquisition time. We also validate our results against ground-truth data and captured photographs. Our framework is useful for increasing efficiency in both novel and existing acquisition setups.
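As an illustration of the architecture described above, here is a minimal Python/PyTorch sketch, not the authors' implementation: a one-layer linear encoder whose non-negative weight rows play the role of the lighting patterns, followed by a stacked non-linear decoder. The layer sizes, the non-negativity projection, and training on random stand-in reflectance data are all assumptions made for the sketch.

import torch
import torch.nn as nn

class LightingAutoencoder(nn.Module):
    """Asymmetric autoencoder: linear 'lighting pattern' encoder, non-linear decoder."""
    def __init__(self, n_lights=512, n_patterns=32):
        super().__init__()
        # Each row of this weight matrix is one lighting pattern (intensity per light source).
        self.encoder = nn.Linear(n_lights, n_patterns, bias=False)
        # Stacked non-linear decoder that recovers reflectance information
        # from the (simulated) measurements.
        self.decoder = nn.Sequential(
            nn.Linear(n_patterns, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_lights),
        )

    def clamp_patterns(self):
        # Physical light intensities cannot be negative, so project the
        # encoder weights onto the non-negative orthant after each update.
        with torch.no_grad():
            self.encoder.weight.clamp_(min=0.0)

    def forward(self, reflectance):                # reflectance: (batch, n_lights)
        measurements = self.encoder(reflectance)   # simulated photographs
        return self.decoder(measurements)

model = LightingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):                               # training loop on synthetic data
    x = torch.rand(64, 512)                        # stand-in synthetic reflectance vectors
    loss = nn.functional.mse_loss(model(x), x)     # reconstruct the input reflectance
    opt.zero_grad(); loss.backward(); opt.step()
    model.clamp_patterns()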
We introduce a novel four-view image-based hair modeling method. Given four hair images taken from the front, back, left, and right views as input, we first estimate the rough 3D shape of the hair observed in the input using a predefined database of 3D hair models, then synthesize a hair texture on the surface of that shape, from which hair growth directions are computed and used to construct a 3D direction field in the hair volume. Finally, we grow hair strands from the scalp, following the direction field, to produce the 3D hair model, which closely resembles the hair in all input images. Our method does not require all input images to come from the same hair, providing an effective way to create compelling hair models from images of considerably different hairstyles taken from different views. We demonstrate the efficacy of our method on a wide range of examples.
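The final strand-growing step can be pictured with a small Python sketch, a hypothetical illustration rather than the authors' code: strands start at scalp roots and are traced through the 3D direction field with a fixed step size. The grid lookup, step length, and termination rule are assumptions.

import numpy as np

def sample_direction(field, origin, spacing, p):
    """Nearest-neighbour lookup of the direction field at point p
    (trilinear interpolation would be smoother)."""
    idx = np.clip(np.round((p - origin) / spacing).astype(int),
                  0, np.array(field.shape[:3]) - 1)
    d = field[idx[0], idx[1], idx[2]]
    n = np.linalg.norm(d)
    return d / n if n > 1e-8 else d

def grow_strand(field, origin, spacing, root, step=0.005, n_steps=200):
    """Trace one strand through the direction field, starting at a scalp root."""
    pts = [root.astype(float)]
    for _ in range(n_steps):
        d = sample_direction(field, origin, spacing, pts[-1])
        if np.linalg.norm(d) < 1e-8:    # direction undefined: stop growing
            break
        pts.append(pts[-1] + step * d)
    return np.array(pts)

# Example: a toy 32^3 direction field that points straight down in y
field = np.zeros((32, 32, 32, 3)); field[..., 1] = -1.0
strand = grow_strand(field, origin=np.zeros(3), spacing=np.full(3, 1 / 32),
                     root=np.array([0.5, 0.9, 0.5]))
print(strand.shape)  # (201, 3): one polyline of strand vertices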
We propose a novel framework that automatically learns the lighting patterns for efficient, joint acquisition of unknown reflectance and shape. The core of our framework is a deep neural network with a shared linear encoder that directly corresponds to the lighting patterns used in physical acquisition, and non-linear decoders that output per-pixel normal and diffuse/specular information from photographs. We exploit the diffuse and normal information from multiple views to reconstruct a detailed 3D shape, and then fit BRDF parameters to the diffuse/specular information, producing texture maps as reflectance results. We demonstrate the effectiveness of the framework on physical objects that vary considerably in reflectance and shape, acquired with as few as 16 to 32 lighting patterns, corresponding to 7 to 15 seconds of per-view acquisition time. Our framework is useful for optimizing efficiency in both novel and existing setups, as it automatically adapts to various factors, including the geometry and lighting layout of the device and the properties of the appearance being acquired.
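A minimal sketch of the network topology implied above, again in Python/PyTorch and with layer sizes and output parameterizations chosen purely for illustration (the actual per-pixel outputs and BRDF model are not specified here): a shared linear encoder standing in for the lighting patterns, followed by two non-linear decoder heads for the per-pixel normal and the diffuse/specular information. In practice the encoder weights would also be kept non-negative, as in the previous sketch.

import torch
import torch.nn as nn

class JointCaptureNet(nn.Module):
    """Shared 'lighting pattern' encoder with normal and diffuse/specular decoder heads."""
    def __init__(self, n_lights=512, n_patterns=32):
        super().__init__()
        self.encoder = nn.Linear(n_lights, n_patterns, bias=False)  # lighting patterns
        def head(out_dim):
            return nn.Sequential(nn.Linear(n_patterns, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
        self.normal_head = head(3)   # per-pixel surface normal
        self.brdf_head = head(4)     # e.g. diffuse RGB plus one specular term (assumed)

    def forward(self, reflectance):                    # reflectance: (batch, n_lights)
        m = self.encoder(reflectance)                  # simulated photographs
        normal = nn.functional.normalize(self.normal_head(m), dim=-1)
        return normal, self.brdf_head(m)

net = JointCaptureNet()
x = torch.rand(8, 512)                                 # stand-in per-pixel reflectance
normals, brdf = net(x)
print(normals.shape, brdf.shape)                       # (8, 3) and (8, 4)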