Figure 1: Our algorithm builds a temporally consistent parameterization for lines extracted from an animated 3D scene.

Abstract: We describe a method to parameterize lines generated from animated 3D models in the context of animated line drawings. Cartoons and mechanical illustrations are popular subjects of non-photorealistic drawings and are often generated from 3D models. Adding texture to the lines, for instance to depict brush strokes or dashed lines, enables greater expressiveness, e.g. to distinguish between visible and hidden lines. However, dynamic visibility events and the evolving shape of the lines raise issues that have only been partially explored so far. In this paper, we assume that the entire 3D animation is known ahead of time, as is typically the case for feature animations and off-line rendering. At the core of our method is a geometric formulation of the problem as a parameterization of the space-time surface swept by a 2D line during the animation. First, we build this surface by extracting lines in each frame; we demonstrate our approach with silhouette lines. Then, we locate visibility events that would create discontinuities and propagate them through time, decomposing the surface into charts with a disc topology. We parameterize each chart via a least-squares approach that reflects the specific requirements of line drawing. This step results in a texture atlas of the space-time surface which defines the parameterization for each line. We show that by adjusting a few weights in the least-squares energy, the artist can obtain an artifact-free animation in a variety of typical non-photorealistic styles such as painterly strokes and technical line drawing.
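To make the least-squares step concrete, here is a minimal sketch in Python of how one chart of the space-time surface could be parameterized: a spatial term makes the parameter follow the 2D arc length along each line, and a temporal term keeps corresponding samples coherent across frames, with adjustable weights as described above. The chart representation (vertex indices, spatial and temporal edge lists) and this specific energy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def parameterize_chart(n_verts, spatial_edges, temporal_edges,
                       arc_lengths, w_spatial=1.0, w_temporal=10.0):
    """Solve for one texture parameter u per vertex of a space-time chart.

    spatial_edges:  (i, j) index pairs along a line within one frame
    temporal_edges: (i, j) pairs linking corresponding samples in
                    consecutive frames
    arc_lengths:    target parameter increment for each spatial edge
    """
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    # Spatial term: u_j - u_i should match the 2D arc length of the edge.
    for (i, j), d in zip(spatial_edges, arc_lengths):
        rows += [r, r]; cols += [i, j]; vals += [-w_spatial, w_spatial]
        rhs.append(w_spatial * d)
        r += 1
    # Temporal term: corresponding samples in consecutive frames should
    # keep the same parameter value (temporal coherence).
    for (i, j) in temporal_edges:
        rows += [r, r]; cols += [i, j]; vals += [-w_temporal, w_temporal]
        rhs.append(0.0)
        r += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(r, n_verts))
    # The system is defined only up to a global shift of u; lsqr returns
    # a minimum-norm least-squares solution.
    return lsqr(A, np.asarray(rhs))[0]
```

Raising w_temporal favors stable texture coordinates over faithful arc length, which is the kind of trade-off the artist-facing weights expose.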
We propose a novel framework for photometric stereo (PS) under low-light conditions using uncalibrated near-light illumination. It operates on free-form video sequences captured with a minimal and affordable setup. We address issues such as albedo variations, shadowing, perspective projections, and camera noise. Our method detects specular spheres with a perspective-correcting Hough transform and uses them to robustly triangulate light positions via a least-squares approach, even in the presence of outliers. Furthermore, we propose an iterative reweighting scheme, in combination with an ℓ-norm minimizer, to robustly solve the calibrated near-light PS problem. In contrast to other approaches, our framework reconstructs depth, albedo (relative to light source intensity), and normals simultaneously, and is demonstrated on synthetic and real-world scenes.
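As an illustration of the triangulation and reweighting ideas, the sketch below intersects the reflected rays recovered from the specular spheres in the least-squares sense, then applies a simple iteratively reweighted loop that down-weights outlier rays. The function names and the specific weighting are assumptions for illustration; the paper's actual reweighting scheme and ℓ-norm minimizer may differ.

```python
import numpy as np

def triangulate_light(origins, directions, weights=None):
    """Weighted least-squares intersection of rays. Each specular
    highlight on a calibration sphere yields one reflected ray
    (origin p, unit direction d) toward the light; the light position
    x minimizes the sum of squared point-to-ray distances (3x3 system)."""
    w = np.ones(len(origins)) if weights is None else weights
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for wi, p, d in zip(w, origins, directions):
        d = np.asarray(d) / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += wi * M
        b += wi * (M @ np.asarray(p))
    return np.linalg.solve(A, b)

def triangulate_light_robust(origins, directions, iters=10, eps=1e-6):
    """Iteratively reweighted least squares: rays with large residuals
    (outlier detections) get small weights, approximating an l1 fit."""
    x = triangulate_light(origins, directions)   # initial L2 estimate
    for _ in range(iters):
        r = [np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (x - p))
             for p, d in zip(origins, directions)]   # directions unit-length
        w = 1.0 / (np.array(r) + eps)
        x = triangulate_light(origins, directions, w)
    return x
```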
Point-based global illumination (PBGI) uses a dense point sampling of the scene's surfaces to approximate indirect light transport and is used intensively in 3D motion pictures and special effects. Each point caches the reflected light using a spherical function and is typically used in a subsequent rasterization process to compute color bleeding and ambient occlusion in an economical, noise-free fashion. The entire point set is organized in a spatial tree structure which models the light transport hierarchically, enabling fast adaptive shading on receivers (e.g., unprojected pixels). One of the major limitations of PBGI is the size of this tree, which can quickly become too large to fit in memory for complex scenes. However, we observe that, just as with natural images, this point data set is extremely redundant. In this paper, we present a new method that exploits this redundancy by factorizing PBGI data over the tree nodes. In particular, we show that a k-means clustering in the parameter space of the spherical functions defines a small number of representative nodes against which any new one can be classified. These representative functions, gathered in a pre-process over a subset of the actual points, form a look-up table which allows node data to be substituted with quantized integers in a streaming process, avoiding building the full tree before compressing it. Depending on the variance of the nodes' spherical functions in the scene and the desired accuracy, our indexed PBGI representation achieves between one and two orders of magnitude of compression on the nodes' spherical functions, with negligible numerical and perceptual error in the final image. In the case of a binary tree with one surfel per leaf and no spherical functions in the leaves, this leads to compression rates ranging from 3× to 5× for the whole tree.
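A minimal sketch of the indexing idea, assuming each node's spherical function is stored as a flat coefficient vector (e.g., spherical harmonics): cluster a subset of nodes with k-means to build the look-up table, then classify each streamed node against it and store only an integer index. The function names and the Euclidean distance in coefficient space are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def build_codebook(node_sh, k=256):
    """k-means over the SH coefficient vectors of a subset of nodes;
    the k centroids form the look-up table of representative functions.
    node_sh: (n_nodes, n_coeffs) array."""
    codebook, _ = kmeans2(node_sh, k, minit='++')
    return codebook

def quantize_node(sh, codebook):
    """Classify one incoming node against the codebook while streaming
    the tree, storing a small integer instead of the full vector."""
    return int(np.argmin(np.linalg.norm(codebook - sh, axis=1)))

def dequantize(index, codebook):
    """At shading time, recover the representative spherical function."""
    return codebook[index]
```

With k = 256 the index fits in a single byte, so replacing a few dozen floating-point coefficients per node is roughly a one-to-two orders-of-magnitude reduction on the spherical functions, consistent with the rates reported above.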
The Point-Based Global Illumination (PBGI) algorithm is composed of two major steps: a caching step and a multiview rasterization step. At caching time, a dense point sampling of the scene is shaded and organized in a spatial hierarchy, with internal nodes approximating the radiance of their subtrees using spherical harmonics. At rasterization time, a microbuffer is instantiated at the unprojected position of each image pixel (receiver). Then, a view-adaptive level-of-detail of the scene is extracted in the form of a tree cut and rasterized into the receiver's microbuffer, solving for visibility using a local variant of the z-buffer. Finally, the pixel color is computed by convolving its filled microbuffer with the surface BRDF. This noise-free indirect lighting method is widely used in the industry and captures several critical lighting effects, including ambient occlusion, color bleeding, (indirect) soft shadows, and environment lighting. However, we observe a large redundancy in this algorithm, both in the cuts and in the receivers' microbuffers, which stems from their relatively low resolution. In this paper, we propose an evolution of PBGI which exploits spatial coherence to reduce these redundant computations. Starting from a similarity-based variational clustering of the receivers, we compute a single tree cut and rasterize a single microbuffer for each cluster. This per-cluster microbuffer provides a faithful approximation of the incident radiance for distant nodes and is composited over a receiver-specific microbuffer that rasterizes only the closest nodes of the cluster's cut. This factorized approach is easy to integrate into any existing PBGI implementation and offers a significant rendering speed-up for a negligible and controllable approximation error.
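The sketch below illustrates the factorization with toy microbuffers: each cluster rasterizes the distant nodes of its cut once into a shared microbuffer, and each receiver rasterizes only the near nodes before depth-compositing the two. The direction-to-texel mapping, the near/far split by a fixed radius, and the final averaging (standing in for the BRDF convolution) are simplified assumptions, not the paper's actual implementation.

```python
import numpy as np

RES = 8  # microbuffer resolution; it is low, hence the redundancy

def rasterize(nodes, center):
    """Toy microbuffer: splat each node (position, radiance) into one
    texel, keeping the nearest sample (local z-buffer variant)."""
    depth = np.full((RES, RES), np.inf)
    color = np.zeros((RES, RES, 3))
    for pos, rad in nodes:
        v = pos - center
        d = np.linalg.norm(v)
        # Crude direction-to-texel mapping, standing in for a real
        # microbuffer parameterization.
        x = int((v[0] / d * 0.5 + 0.5) * (RES - 1))
        y = int((v[1] / d * 0.5 + 0.5) * (RES - 1))
        if d < depth[y, x]:
            depth[y, x], color[y, x] = d, rad
    return depth, color

def shade_cluster(nodes, receivers, near_radius):
    """Factorized shading of one receiver cluster: the far part of the
    cut is rasterized once and shared, the near part per receiver."""
    centroid = np.asarray(receivers).mean(axis=0)
    far = [n for n in nodes if np.linalg.norm(n[0] - centroid) > near_radius]
    near = [n for n in nodes if np.linalg.norm(n[0] - centroid) <= near_radius]
    far_depth, far_color = rasterize(far, centroid)      # computed once
    out = []
    for r in receivers:
        d, c = rasterize(near, r)                        # per receiver
        mask = far_depth < d                             # depth composite
        c = np.where(mask[..., None], far_color, c)
        out.append(c.reshape(-1, 3).mean(axis=0))        # stand-in for the
    return out                                           # BRDF convolution
```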
In the style of binary shading, shape and illumination are depicted using two colours, typically black and white, which form coherent lines and regions in the image. We formulate the problem of assigning colours in the rendered image as an energy minimization, computed using a graph cut on the image grid. The terms of this energy come from two sources: appearance (shading) and geometry (depth and curvature). Our contributions lie in the use of geometric information in determining colours, and in how this information is incorporated into a graph cut approach. This optimization yields boundaries between black and white regions that tend to be shorter and to run along geometric features like creases. We show a range of results, and demonstrate that this approach produces more coherent images than simpler approaches that make local decisions when assigning colours, or that do not use geometry.
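A minimal sketch of such an energy, using the PyMaxflow library (an assumption; the abstract does not prescribe a particular solver) with simplified stand-ins for the appearance and geometry terms:

```python
import numpy as np
import maxflow  # pip install PyMaxflow

def binary_shade(shading, features, lam=0.5):
    """Binary black/white assignment as a graph cut (illustrative sketch).

    shading:  HxW array in [0, 1], rendered shading of the 3D model
    features: HxW array in [0, 1], strength of geometric features
              (creases, depth discontinuities) at each pixel
    Returns an HxW boolean image, True = white.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(shading.shape)
    # Pairwise (geometry) term: a cut between 4-neighbours costs lam,
    # discounted along geometric features, so black/white boundaries
    # stay short and snap to creases.
    smooth = lam * (1.0 - features)
    g.add_grid_edges(nodes, weights=smooth, symmetric=True)
    # Unary (appearance) term: bright pixels prefer white, dark prefer
    # black; the source/sink capacities are the costs of each label.
    g.add_grid_tedges(nodes, 1.0 - shading, shading)
    g.maxflow()
    return g.get_grid_segments(nodes)   # True = white region
```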