We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMM suffer from a lack of local control. We thus create a 3DMM by combining local part-based 3DMMs for the eyes, nose, mouth, ears, and facial mask regions. Our local principal component analysis (PCA)-based approach uses a novel method to select the best eigenvectors from the local 3DMMs, ensuring that the combined 3DMM is expressive while still allowing accurate reconstruction. We provide different editing paradigms, all designed from an analysis of the data set: some use anthropometric measurements from the literature, and others let the user control the dominant modes of variation extracted from the data set. Our part-based 3DMM is compact yet accurate and, compared to other 3DMM methods, provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and gives the user intuitive local control.
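To make the part-based construction concrete, the following is a minimal Python sketch of fitting one local PCA model per facial region and reassembling a face from per-part coefficients. The part index lists, the variance-based eigenvector cutoff, and all function names are illustrative assumptions; they stand in for the paper's dedicated eigenvector-selection method and its handling of overlapping regions.

```python
import numpy as np

def build_local_pca(scans, part_indices, variance_kept=0.95):
    """Fit one PCA basis per facial part from a (num_scans, 3 * num_vertices) array.

    part_indices maps a part name (e.g. "nose") to the vertex indices of that region;
    these labels are assumed to be available, not derived here.
    """
    models = {}
    for part, idx in part_indices.items():
        # Columns holding the x, y, z coordinates of this part's vertices.
        cols = np.concatenate([3 * np.asarray(idx) + k for k in range(3)])
        X = scans[:, cols]
        mean = X.mean(axis=0)
        # SVD of the centered data gives the principal components (eigenvectors).
        _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        # Keep the leading eigenvectors explaining the requested variance;
        # a simple stand-in for the paper's eigenvector-selection scheme.
        explained = np.cumsum(S**2) / np.sum(S**2)
        n_keep = int(np.searchsorted(explained, variance_kept)) + 1
        models[part] = {"mean": mean, "basis": Vt[:n_keep], "cols": cols}
    return models

def reconstruct(models, coeffs, num_coords):
    """Assemble a full face vector from per-part coefficients.

    Blending of overlapping regions between parts is omitted in this sketch.
    """
    face = np.zeros(num_coords)
    for part, m in models.items():
        face[m["cols"]] = m["mean"] + coeffs[part] @ m["basis"]
    return face
```

Editing a single region then amounts to changing only that part's coefficient vector before calling the reconstruction, which is what gives this kind of model its local control.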
Figure 1: Our approach transfers the animation setup from a rigged source character to target character meshes. Using a geometric correspondence, it retargets the skeleton and the skinning weights to animate the target static meshes.
We present a general method for transferring skeletons and skinning weights between characters with distinct mesh topologies. Our pipeline takes as inputs a source character rig (consisting of a mesh, a transformation hierarchy of joints, and skinning weights) and a target character mesh. From these inputs, we compute joint locations and orientations that embed the source skeleton in the target mesh, as well as skinning weights that bind the target geometry to the new skeleton. Our method consists of two key steps. We first compute the geometric correspondence between the source and target meshes using a semi-automatic method relying on a set of markers. The resulting geometric correspondence is then used to formulate attribute transfer as an energy minimization and filtering problem. We demonstrate our approach on a variety of source and target bipedal characters varying in mesh topology and morphology. Several examples demonstrate that the target characters behave well when animated with either forward or inverse kinematics. Through these examples, we show that our method preserves subtle artistic variations: spatial relationships between geometry and joints, as well as skinning weight details, are accurately maintained. Our proposed pipeline opens up many exciting possibilities to quickly animate novel characters by reusing existing production assets.
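As a rough illustration of the attribute-transfer step, the sketch below interpolates skinning weights from a source mesh onto a target mesh through a precomputed per-vertex correspondence expressed as barycentric coordinates on source triangles. The correspondence computation, the energy minimization, and the filtering described in the paper are not reproduced; the array names and the clamp-and-renormalize step are assumptions made for illustration.

```python
import numpy as np

def transfer_skinning_weights(source_weights, source_faces, corr_face, corr_bary):
    """Interpolate skinning weights through a precomputed geometric correspondence.

    source_weights : (num_src_vertices, num_joints) weights of the rigged source.
    source_faces   : (num_src_faces, 3) vertex indices of the source triangles.
    corr_face      : (num_tgt_vertices,) index of the source triangle each target vertex maps to.
    corr_bary      : (num_tgt_vertices, 3) barycentric coordinates inside that triangle.
    Returns (num_tgt_vertices, num_joints) skinning weights for the target mesh.
    """
    tri_verts = source_faces[corr_face]          # (num_tgt, 3) source vertex ids per target vertex
    tri_weights = source_weights[tri_verts]      # (num_tgt, 3, num_joints)
    # Barycentric interpolation of the three corner weight vectors.
    target = np.einsum("tk,tkj->tj", corr_bary, tri_weights)
    # Clamp tiny negative values from numerical noise and renormalize so each
    # vertex's weights sum to one; a simple stand-in for the paper's filtering step.
    target = np.clip(target, 0.0, None)
    target /= np.maximum(target.sum(axis=1, keepdims=True), 1e-12)
    return target
```

The same correspondence can be reused to place joints, for instance by mapping each source joint's nearby surface points onto the target and solving for a consistent position, which is what keeps the spatial relationship between geometry and joints intact.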
Figure 1: Examples of realistic 4096 × 4096 resolution face textures and displacement maps generated from the chosen color values, face mesh, age, and gender.