Fig. 1. Design control and perceptual reproduction of material translucency for full-color 3D printing: (a) a printed anatomy model with opaque and translucent parts, revealing internal structures; (b) a polished transparent lion with a red mane and a 3D logo inside; (c) a transparent lizard with colored scales illuminated from below by a checkerboard pattern; (d) a close-up (see Figure 12 for the full view) of wax and a 3D print reproducing the wax's translucent appearance.

We present an efficient and scalable pipeline for fabricating full-color objects with spatially varying translucency from practical and accessible input data via multi-material 3D printing. Observing that the costs associated with BSSRDF measurement and processing are high, that the range of 3D-printable BSSRDFs is severely limited, and that the human visual system relies only on simple high-level cues to perceive translucency, we propose a method based on reproducing perceptual translucency cues. The input to our pipeline is an RGBA signal defined on the surface of an object, making our approach accessible and practical for designers. We propose a framework for extending standard color management and profiling to combined color and translucency management, using a gamut correspondence strategy we call opaque relative processing. We present an efficient streaming method to compute voxel-level material arrangements, achieving both realistic reproduction of measured translucent materials and artistic effects involving multiple fully or partially transparent geometries.
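As a rough illustration of the voxel-level material-arrangement stage, the sketch below (not the authors' streaming method; all names are hypothetical) dithers the alpha channel of an RGBA volume into a binary clear/white material assignment, so that the local fraction of scattering material tracks the requested opacity:

```python
import numpy as np

def assign_voxel_materials(rgba, rng=None):
    """Toy illustration: map a per-voxel RGBA signal to a discrete
    multi-material assignment by stochastically dithering the alpha
    channel between a clear and a white (scattering) base material.

    rgba: (..., 4) float array in [0, 1]; the A channel encodes opacity.
    Returns an integer array: 0 = clear material, 1 = white material.
    """
    rng = np.random.default_rng() if rng is None else rng
    alpha = rgba[..., 3]
    # Each voxel becomes opaque with probability alpha, so the local
    # fraction of scattering material matches the requested opacity.
    return (rng.random(alpha.shape) < alpha).astype(np.uint8)

# Usage: a 64^3 volume with a linear opacity ramp along the z axis.
vol = np.zeros((64, 64, 64, 4))
vol[..., 3] = np.linspace(0.0, 1.0, 64)[None, None, :]
materials = assign_voxel_materials(vol)
print(materials.mean())  # close to 0.5: half clear, half white overall
```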
We present a novel approach for segmenting an animated object into near-rigid components, given k poses of the same non-rigid object. We model the segmentation problem as a clustering problem in dual space and find near-rigid segments whose boundaries lie in regions of large deformation. The presented approach is asymptotically faster than previous approaches that achieve the same property and does not require any user-specified parameters; if desired, however, the user may interactively change the number of segments. We demonstrate the practical value of our approach through experiments.
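A minimal stand-in for this idea (not the paper's dual-space formulation, and, unlike the paper, requiring an explicit segment count; scikit-learn is assumed for clustering) is to describe each triangle by its rotation relative to a rest pose and cluster those rotations, since faces in the same near-rigid part rotate together:

```python
import numpy as np
from sklearn.cluster import KMeans

def face_frames(verts, faces):
    """Orthonormal frame per triangle (two tangents and the normal)."""
    e1 = verts[faces[:, 1]] - verts[faces[:, 0]]
    e2 = verts[faces[:, 2]] - verts[faces[:, 0]]
    t1 = e1 / np.linalg.norm(e1, axis=1, keepdims=True)
    n = np.cross(e1, e2)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    t2 = np.cross(n, t1)
    return np.stack([t1, t2, n], axis=2)  # (n_faces, 3, 3)

def near_rigid_segments(poses, faces, n_segments):
    """Cluster faces by their per-pose rotations relative to the first
    pose; faces that rotate together form one near-rigid part.

    poses: (k, n_vertices, 3) vertex positions for k >= 2 poses.
    faces: (n_faces, 3) integer vertex indices.
    """
    f0 = face_frames(poses[0], faces)
    feats = []
    for p in poses[1:]:
        fi = face_frames(p, faces)
        # Rotation taking the rest-pose frame to the posed frame.
        rot = fi @ np.transpose(f0, (0, 2, 1))
        feats.append(rot.reshape(len(faces), 9))
    feat = np.concatenate(feats, axis=1)  # (n_faces, 9 * (k - 1))
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(feat)
```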
With systems for acquiring 3D surface data ever more commonplace, it has become important to reliably extract specific shapes from the acquired data. In the presence of noise and occlusions, this can be done using statistical shape models, which are learned from databases of clean examples of the shape in question. In this paper, we review, analyze, and compare different statistical models, from those that analyze the variation in geometry globally to those that analyze it locally. We first review how different types of models have been used in the literature, then define the models and analyze them theoretically, in terms of both their statistical and computational aspects. We then perform an extensive experimental comparison on the task of model fitting and give intuition about which type of model is better suited to a few applications. Due to the wide availability of databases of high-quality data, we use the human face as the specific shape we wish to extract from corrupted data.
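To make the fitting task concrete, here is a minimal sketch of fitting a global PCA-style statistical shape model to a partially observed, noisy scan. This is a generic formulation, not any specific model from the comparison, and all names are ours:

```python
import numpy as np

def fit_pca_shape_model(mean, basis, observed, visible, reg=1e-3):
    """Minimal sketch: solve for coefficients c minimizing
    ||W (mean + basis @ c - observed)||^2 + reg * ||c||^2,
    where W selects the visible (non-occluded) entries.

    mean:     (3n,) mean shape, vertex coordinates flattened.
    basis:    (3n, k) principal components.
    observed: (3n,) noisy scan, garbage where not visible.
    visible:  (3n,) boolean mask of trustworthy entries.
    """
    A = basis[visible]                     # rows for visible coordinates
    b = observed[visible] - mean[visible]  # residual to explain
    # Ridge-regularized least squares keeps coefficients plausible
    # (near the training distribution) under heavy noise.
    c = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return mean + basis @ c  # completed, denoised shape
```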
We present a statistical model for 3D human faces in varying expression, which decomposes the surface of the face using a wavelet transform and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model, we can reconstruct faces from noisy and occluded 3D face scans and from facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized and multi-scale nature of our model allows us to recover fine-scale detail while remaining robust to severe noise and occlusion, and keeps the model computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that, in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, it better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
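The core idea, transform first and then learn independent localized models per coefficient, can be sketched in a toy 1D setting. Below, a single-level Haar transform and per-coefficient Gaussian priors with Wiener-style shrinkage stand in for the paper's wavelet decomposition and multilinear models; everything here is illustrative rather than the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar(x):
    """Single-level Haar transform along the last axis (even length)."""
    a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)  # coarse band
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2.0)  # detail band
    return np.concatenate([a, d], axis=-1)

def ihaar(c):
    """Inverse of haar()."""
    n = c.shape[-1] // 2
    a, d = c[..., :n], c[..., n:]
    x = np.empty_like(c)
    x[..., 0::2] = (a + d) / np.sqrt(2.0)
    x[..., 1::2] = (a - d) / np.sqrt(2.0)
    return x

# Train: per-coefficient Gaussian priors stand in for the localized
# multilinear models (one independent model per wavelet coefficient).
train = rng.normal(size=(200, 64)).cumsum(axis=1)  # smooth-ish "shapes"
C = haar(train)
mu, var = C.mean(axis=0), C.var(axis=0)

# Fit: shrink each noisy coefficient toward its prior mean; corruption
# only touches coefficients whose basis functions overlap it, so the
# damage stays spatially contained.
noise_var = 0.5
scan = train[0] + rng.normal(scale=np.sqrt(noise_var), size=64)
c_hat = mu + (var / (var + noise_var)) * (haar(scan) - mu)
restored = ihaar(c_hat)
print("restored error:", np.abs(restored - train[0]).mean())
print("noisy-scan error:", np.abs(scan - train[0]).mean())
```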