Point-based graphics has gained much attention as an alternative to polygon-based approaches because of its simplicity and flexibility. However, current point-based techniques do not provide sufficient rendering quality for translucent materials such as human skin. In this paper, we propose a point-based framework with subsurface scattering of light, which is essential for creating the soft, semi-translucent appearance of human skin. To accurately simulate subsurface scattering in multilayered materials, we present splat-based diffusion, which applies a linear combination of several Gaussian basis functions to each splat in object space. Compared with existing point-based approaches, our method offers significantly improved visual quality when rendering human faces, and it achieves visual quality comparable to polygon-based rendering using the texture space diffusion technique. We demonstrate the effectiveness of our approach in rendering scanned faces realistically.

Keywords: subsurface scattering, point-based rendering, skin rendering, diffusion profile, sum of Gaussians

Citation: Kim H J, Bickel B, Gross M, et al. Subsurface scattering using splat-based diffusion in point-based rendering. Sci China Inf Sci, 2010, 53: 911-919.
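The splat-based diffusion described above evaluates a sum-of-Gaussians diffusion profile per splat, R(r) = Σ_i w_i G(v_i, r) with G(v, r) = exp(-r²/(2v)) / (2πv). The minimal Python sketch below only illustrates this general form; the weights and variances are placeholder values for illustration, not the paper's fitted parameters.

```python
import numpy as np

def gaussian_2d(variance, r):
    """Normalized 2D Gaussian: G(v, r) = exp(-r^2 / (2v)) / (2*pi*v)."""
    return np.exp(-r ** 2 / (2.0 * variance)) / (2.0 * np.pi * variance)

def diffusion_profile(r, weights, variances):
    """Sum-of-Gaussians diffusion profile R(r) = sum_i w_i * G(v_i, r).

    `weights` and `variances` hold one row per Gaussian and one column per
    color channel (shape (k, 3)); `r` is the distance on the surface from
    the point where light enters.
    """
    r = np.asarray(r, dtype=float)[..., None]   # broadcast r over RGB channels
    terms = [w * gaussian_2d(v, r) for w, v in zip(weights, variances)]
    return np.sum(terms, axis=0)                # per-channel response, shape (..., 3)

# Illustrative placeholder parameters (NOT the paper's fitted values):
# broader Gaussians scatter the red channel further, which is what gives
# skin its soft, slightly reddish translucency.
weights = np.array([[0.25, 0.35, 0.45],
                    [0.40, 0.45, 0.40],
                    [0.35, 0.20, 0.15]])
variances = np.array([[0.005, 0.004, 0.003],
                      [0.050, 0.030, 0.020],
                      [0.600, 0.200, 0.100]])   # assumed units: mm^2

print(diffusion_profile(1.0, weights, variances))   # RGB response at r = 1 mm
```

At render time such a profile would weight the irradiance gathered from nearby splats; how that accumulation is organized per splat is specific to the paper and not reproduced here.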
Recent advances in facial scanning technology provide highly detailed faces, including fine wrinkles. However, because of the increasing complexity of the resulting models, it is necessary to reduce the amount of data while preserving small-scale facial features. In this paper, we propose a new adaptive surface splatting method that reduces the number of splats by optimizing their size and shape using geometric and color information. With the optimized splats, we can improve the rendering quality, especially in visually sensitive feature areas. Our adaptive surface splatting is very effective for rendering massive facial data on tablet PCs or smartphones.
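The abstract does not state the optimization criterion; as a rough illustration of how geometric and color information might jointly drive splat simplification, the hypothetical Python sketch below collapses a splat neighborhood only when both the normal deviation and the color variance stay under thresholds. The function names and thresholds are assumptions, not the authors' method.

```python
import numpy as np

def can_merge(points, normals, colors,
              max_normal_dev_deg=10.0, max_color_var=0.002):
    """Hypothetical merge test for a splat neighborhood.

    A neighborhood is collapsed into one splat only if the surface is
    locally flat (small normal deviation) AND visually uniform (small
    per-channel color variance), so that visually sensitive feature
    areas keep their small splats. Inputs are (n, 3) arrays; normals
    are assumed to be unit length.
    """
    mean_n = normals.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n)
    cos_dev = np.clip(normals @ mean_n, -1.0, 1.0)
    max_dev = np.degrees(np.arccos(cos_dev.min()))   # worst angular deviation
    color_var = colors.var(axis=0).max()             # worst channel variance
    return max_dev < max_normal_dev_deg and color_var < max_color_var

def merged_splat(points, normals, colors):
    """Replace a neighborhood by a single splat: centroid position, mean
    normal, mean color, and a radius large enough to cover the points."""
    center = points.mean(axis=0)
    normal = normals.mean(axis=0)
    normal /= np.linalg.norm(normal)
    radius = np.linalg.norm(points - center, axis=1).max()
    return center, normal, radius, colors.mean(axis=0)
```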
We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points while preserving the detected facial features. Finally, the point model is rendered directly, without additional processing such as skin texture parameterization. This fully automatic framework is very effective for rendering massive facial data on mobile devices.
Figure 1: We use statistics extracted from example faces to augment interactively drawn concept sketches for synthesizing realistic facial wrinkles.

Abstract: Synthesizing facial wrinkles has been tackled either through a long process of manual sculpting on 3D models or with automatic methods that allow no user interaction or artistic expression. In this paper, we propose a method that accepts interactive, sketchy drawings depicting wrinkle patterns and synthesizes realistic-looking wrinkles on faces. The method inherits the simplicity of sketching, making it possible for artists as well as novice users to generate realistic facial detail very efficiently, enabling fast previews for physical makeup or aging simulations in both casual and professional applications. All strokes are used to infer the wrinkles, retaining the expressiveness of the sketches and the realism of the final result at the same time. This is achieved by designing novel multi-scale statistics tailored to wrinkle geometry and coupled to the sketch interpretation method. The statistics capture the cross-sectional profiles of wrinkles at different scales and in different parts of a face. The strokes are augmented with statistics extracted from given example face models and applied to an input face model interactively. The interface gives the user control over the shapes and scales of wrinkles via sketching while automatically adding the extra details required for realism.
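The paper's statistics are not specified beyond capturing cross-sectional wrinkle profiles at several scales; the hypothetical Python sketch below shows one plausible way such multi-scale profile statistics could be gathered along a stroke over a displacement field. The `height` callable, the scale values, and the sampling scheme are all assumptions made for illustration.

```python
import numpy as np

def cross_section_stats(height, stroke_pts, stroke_normals,
                        scales=(2.0, 4.0, 8.0), samples=17):
    """Hypothetical multi-scale cross-section statistics for a wrinkle.

    For every point on a sketched stroke, the displacement field
    `height(p)` is sampled along the stroke's in-plane normal at several
    half-widths (`scales`); each scale's set of profiles is summarized
    by its per-sample mean and standard deviation.
    """
    stats = {}
    for scale in scales:
        offsets = np.linspace(-scale, scale, samples)
        profiles = np.array([[height(p + t * n) for t in offsets]
                             for p, n in zip(stroke_pts, stroke_normals)])
        stats[scale] = (profiles.mean(axis=0), profiles.std(axis=0))
    return stats
```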