The development of methods and tools for generating visually appealing motion sequences from prerecorded motion capture data has become an important research area in computer animation. In particular, data-driven approaches have been used to reconstruct high-dimensional motion sequences from low-dimensional control signals. In this article, we contribute to this strand of research by introducing a novel framework for generating full-body animations controlled by only four 3D accelerometers attached to the extremities of a human actor. Our approach relies on a knowledge base consisting of a large number of motion clips obtained from marker-based motion capture. Based on the sparse accelerometer input, a cross-domain retrieval procedure is applied to build up a lazy neighborhood graph in an online fashion. This graph structure points to suitable motion fragments in the knowledge base, which are then used in the reconstruction step. Supported by a kd-tree index structure, our procedure scales even to large datasets consisting of millions of frames. Our combined approach allows for reconstructing visually plausible continuous motion streams. (J. Tautges and T. Helten were financially supported by grants from Deutsche Forschungsgemeinschaft, WE 1945/5-1 and MU 2686/3-1.)
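To make the retrieval step concrete, the following Python sketch shows a kd-tree-based nearest-neighbor lookup over a motion database, the kind of index structure the abstract refers to. The 12-dimensional feature layout (four 3D accelerometers), the function names, and the neighbor counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_index(db_accel_features):
    """db_accel_features: (num_frames, 12) array of accelerometer features
    (four 3D sensors stacked) simulated from the motion capture knowledge base.
    This layout is an assumption for illustration."""
    return cKDTree(db_accel_features)

def query_neighbors(index, live_frame, k=64):
    """Return the k database frames whose (simulated) accelerometer readings
    are closest to the current sensor input. Such candidate frames would feed
    a neighborhood-graph structure and the subsequent pose reconstruction."""
    dists, idx = index.query(live_frame, k=k)
    return idx, dists

if __name__ == "__main__":
    # Toy-sized random stand-in for a real motion capture database.
    rng = np.random.default_rng(0)
    database = rng.standard_normal((100_000, 12)).astype(np.float32)
    index = build_index(database)
    sensor_frame = rng.standard_normal(12).astype(np.float32)
    candidates, dists = query_neighbors(index, sensor_frame, k=16)
    print(candidates)
```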
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture cloth models, specifically in the context of computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model that integrates state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation, and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
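A minimal Python sketch of what a geometric yarn model can look like: a yarn centerline that follows a binary weave pattern with a sinusoidal crimp profile. The crimp shape, parameter names, and default values are assumptions for illustration, not the model estimated by the paper.

```python
import numpy as np

def warp_yarn_path(weave_column, spacing=1.0, crimp_amplitude=0.25,
                   samples_per_crossing=16):
    """Centerline of one warp yarn as an (N, 3) point array.

    weave_column: binary sequence, 1 where the warp passes over the weft,
                  0 where it passes under (one entry per crossing).
    The yarn runs along y and oscillates in z; the oscillation amplitude
    is controlled by a crimp parameter (illustrative assumption).
    """
    pts = []
    for i, over in enumerate(weave_column):
        t = np.linspace(0.0, 1.0, samples_per_crossing, endpoint=False)
        y = (i + t) * spacing
        sign = 1.0 if over else -1.0
        # Smooth bump above the weft for "over" crossings, below for "under".
        z = sign * crimp_amplitude * np.sin(np.pi * t)
        pts.append(np.stack([np.zeros_like(y), y, z], axis=1))
    return np.concatenate(pts, axis=0)

# Plain weave: the warp alternates over/under along the yarn.
path = warp_yarn_path([1, 0, 1, 0, 1, 0])
print(path.shape)
```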
Photorealistic visualization of a huge number of individual filaments, as in the case of hair, fur, or knitwear, is a challenging task: explicit rendering approaches that simulate radiance transfer at each filament become impractical with respect to rendering performance, and it is also not obvious how to derive efficient scattering functions for different levels of (geometric) abstraction or how to deal with very complex scattering mechanisms. We present a novel uniform formalism for light scattering from filaments in terms of radiance, which we call the Bidirectional Fiber Scattering Distribution Function (BFSDF). We show that previous specialized approaches, which have been developed in the context of hair rendering, can be seen as instances of the BFSDF. Similar to the role of the BSSRDF for surface scattering, the BFSDF can be seen as a general approach for light scattering from filaments, which is suitable for deriving approximations in a canonical and systematic way. For the frequent case of distant light sources and observers, we deduce an efficient far-field approximation (the Bidirectional Curve Scattering Distribution Function, BCSDF). We show that, on the basis of the BFSDF, parameters for common rendering techniques can be estimated in a non-ad-hoc, physically based way.
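To illustrate what a far-field curve scattering function evaluates, the sketch below implements a single reflection (R) lobe in the separable longitudinal/azimuthal form widely used for fiber shading (in the spirit of Marschner-style hair models). It is not the BFSDF/BCSDF derivation of this paper; the Gaussian longitudinal term, the azimuthal term, and the default roughness and shift values are assumptions for illustration.

```python
import numpy as np

def bcsdf_r_lobe(theta_i, theta_r, phi,
                 beta=np.radians(5.0), shift=np.radians(-3.0)):
    """Far-field fiber scattering sketch for the primary reflection lobe,
    in the common separable form  S ~ M(theta_h) * N(phi) / cos^2(theta_d).
    Angles are in radians; beta (roughness) and shift are illustrative."""
    theta_h = 0.5 * (theta_i + theta_r)   # longitudinal half angle
    theta_d = 0.5 * (theta_r - theta_i)   # longitudinal difference angle
    # Longitudinal term: Gaussian around the shifted specular direction.
    M = (np.exp(-0.5 * ((theta_h - shift) / beta) ** 2)
         / (beta * np.sqrt(2.0 * np.pi)))
    # Azimuthal term of a perfectly reflecting circular cross-section.
    N = 0.25 * np.cos(0.5 * phi)
    return M * N / np.cos(theta_d) ** 2

print(bcsdf_r_lobe(0.1, 0.15, 0.3))
```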
Figure 1: Comparison of our method to path tracing and existing hair shading methods with deep opacity maps: path tracing (7.8 hours), offline dual scattering (5.2 minutes), real-time dual scattering (14 fps), single scattering only (20 fps), single scattering + diffuse (20 fps), and the Kajiya-Kay shading model (20 fps). Our dual scattering approximations (offline ray shooting and real-time GPU-based implementations) achieve results close to the path tracing reference without any parameter adjustment and with significantly improved rendering times. Using single scattering only fails to produce the correct hair color. Adding an ad-hoc diffuse component or the Kajiya-Kay shading model also fails to achieve the same realism, even after hand-tweaking the diffuse color and shadow opacity to match the reference. The hair model has 50K strands and 1.4M line segments.

Abstract: When rendering light-colored hair, multiple fiber scattering is essential for the right perception of the overall hair color. In this context, we present a novel technique to efficiently approximate multiple fiber scattering for a full head of human hair or a similar fiber-based geometry. In contrast to previous ad-hoc approaches, our method relies on the physically accurate concept of the Bidirectional Scattering Distribution Function and gives physically plausible results with no need for parameter tweaking. We show that complex scattering effects can be approximated very well by using aggressive simplifications based on this theoretical model. When compared to unbiased Monte Carlo path tracing, our approximations preserve photo-realism in most settings but with rendering times at least two orders of magnitude lower. Time and space complexity are much lower compared to photon mapping-based techniques, and we can even achieve realistic results in real time on a standard PC with consumer graphics hardware.
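The following Python sketch only conveys the high-level idea behind a dual-scattering style approximation: multiple fiber scattering is split into a global component (light forward-scattered through the fibers in front of the shading point) and a local component (backscattering near the shading point). All quantities are scalars here, and the combination, parameter names, and the density factor are simplifying assumptions, not the paper's actual formulation.

```python
def dual_scattering_radiance(direct, f_single, f_back, forward_transmittance,
                             density_factor=0.7):
    """Conceptual sketch of a global/local split for multiple fiber scattering.

    direct:                incident direct illumination at the shading point
    f_single:              single-scattering response of one fiber
    f_back:                backscattering lobe (local multiple scattering)
    forward_transmittance: attenuation by fibers between light and shading point
    """
    # Global component: light that reaches the point after being
    # forward-scattered through the intervening hair volume.
    psi_global = direct * forward_transmittance
    # Local component: single scattering augmented by a backscatter lobe.
    f_local = f_single + density_factor * f_back
    # Shade both the directly visible and the globally scattered light
    # with the locally augmented scattering response.
    return (direct + psi_global) * f_local

print(dual_scattering_radiance(1.0, 0.3, 0.1, 0.5))
```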