Virtual character animation is an important and active area of study. Research in this field has produced many solutions and systems for developing 3D human platforms, and has branched into numerous lines of investigation into the creation of realistic virtual humans, including appearance, expressions, emotions, reasoning, communication and behaviour. Although many projects and methodologies have been developed, there was no common standard until the Moving Picture Experts Group (MPEG) proposed a broad specification for facial and body animation, defined as MPEG-4 FBA (facial and body animation). This standard led to the development of a number of MPEG-4-compliant character animation frameworks, whose shared parametric representation makes it possible to compare the different methodologies used to create virtual faces. In this paper, we present a comprehensive survey of state-of-the-art MPEG-4 facial animation (FA) character animation frameworks, define common criteria for comparing them, and perform experiments to evaluate their functionality, usability, MPEG-4 compliance, animation quality, performance, coarticulation techniques, support for embodied-character capabilities, mutual compatibility and model-simplification solutions. In particular, we compare complete frameworks such as Greta and Xface to our framework, Charisma, including some of the variations and subsystems present in these systems.
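The parametric representation that makes such cross-framework comparison possible is MPEG-4 FA's separation of animation data (FAPs, Facial Animation Parameters) from model geometry via FAPUs (Facial Animation Parameter Units). The sketch below is illustrative only and not taken from any of the surveyed systems; it shows how a FAPU normalizes a FAP value so the same stream drives differently proportioned face models.

```python
# Illustrative sketch of MPEG-4 FA's model-independent parameterization:
# FAP amplitudes are expressed in FAPU units (here the mouth-width FAPU),
# so one animation stream produces proportional motion on any model.

def fapu_mouth_width(mouth_left_x, mouth_right_x):
    """MW (mouth-width) FAPU: mouth width divided by 1024, per MPEG-4 FA."""
    return abs(mouth_right_x - mouth_left_x) / 1024.0

def fap_to_displacement(fap_value, fapu):
    """Convert a unitless FAP amplitude into a model-space displacement."""
    return fap_value * fapu

# Two models with different mouth widths receive the same FAP value,
# but each moves in proportion to its own geometry.
fapu_small = fapu_mouth_width(-2.0, 2.0)      # mouth 4 units wide
fapu_large = fapu_mouth_width(-4.0, 4.0)      # mouth 8 units wide
d_small = fap_to_displacement(512, fapu_small)
d_large = fap_to_displacement(512, fapu_large)
assert d_large == 2 * d_small                  # displacement scales with model
```

This decoupling is what lets frameworks such as Greta, Xface and Charisma exchange and replay the same animation streams.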
In this paper, we present a novel coarticulation and speech synchronization framework compliant with MPEG-4 facial animation. The system we have developed uses the MPEG-4 facial animation standard and related developments to enable the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization. The framework enables real-time model simplification using quadric-based surfaces. Our coarticulation approach provides realistic, high-performance lip-sync animation based on Cohen-Massaro's model of coarticulation adapted to the MPEG-4 facial animation (FA) specification. Preliminary experiments show that the coarticulation technique we have developed gives promising results overall when compared to related techniques.
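In the Cohen-Massaro model referenced above, each speech segment exerts a time-decaying "dominance" over an articulatory parameter, and the realized trajectory is the dominance-weighted average of the segment targets. The sketch below illustrates the general form of that model applied to a single MPEG-4 FAP; the parameter names (alpha, theta, c) follow the model, but the numeric values and the `blended_fap` helper are illustrative assumptions, not the paper's implementation.

```python
import math

# Hedged sketch of Cohen-Massaro-style coarticulation for one FAP:
# each phoneme segment has a target value and an exponential dominance
# function; overlapping dominances blend neighboring targets, producing
# the smooth transitions that characterize natural lip motion.

def dominance(t, center, alpha, theta, c=1.0):
    """Exponential dominance of a segment centered at time `center`."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blended_fap(t, segments):
    """Dominance-weighted FAP value; `segments` = (center, target, alpha, theta)."""
    num = den = 0.0
    for center, target, alpha, theta in segments:
        d = dominance(t, center, alpha, theta)
        num += d * target
        den += d
    return num / den if den else 0.0

# Two adjacent visemes: a rounded vowel (FAP target 300) followed by a
# bilabial closure (target -200). Midway between them, the lip FAP is a
# compromise between the two targets rather than an abrupt switch.
segs = [(0.0, 300.0, 1.0, 4.0), (0.2, -200.0, 1.0, 4.0)]
mid = blended_fap(0.1, segs)
assert -200.0 < mid < 300.0
```

Adapting this to MPEG-4 FA amounts to evaluating such a blend per frame for each speech-related FAP and emitting the result into the animation stream.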
Level-of-detail (LoD) techniques are efficient and well-established tools for representing 3D models. However, while they are well documented and have proven results, little research offers a solution that is both compatible with the MPEG-4 (Moving Picture Experts Group) facial animation standard and addresses human face mesh simplification. In this paper we present our work on an adaptive LoD algorithm that attempts to preserve human face features and, where possible, to simplify animation streams by analyzing the scene and the model. Our approach applies continuous LoD to an MPEG-4-compliant model using an extension of Garland's quadric-based surface simplification algorithm. This is achieved with a novel selective simplification approach that is both compatible with MPEG-4 and able to simplify animation streams.
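Garland's quadric error metric (QEM), the basis of the extension described above, represents each triangle's plane p = (a, b, c, d) as a 4x4 quadric K = p pᵀ; a vertex's quadric is the sum over its incident planes, and the cost of contracting an edge (v1, v2) to position v is vᵀ(Q1 + Q2)v. The sketch below shows this core metric; the `feature_penalty` parameter is a hypothetical stand-in for the paper's MPEG-4-aware feature preservation, not its actual policy.

```python
import numpy as np

# Minimal sketch of Garland's quadric error metric (QEM). The error of
# a vertex against a quadric equals the sum of squared distances to the
# planes that built it, which is what drives edge-collapse ordering.

def plane_quadric(a, b, c, d):
    """Quadric K_p = p p^T for the plane a*x + b*y + c*z + d = 0."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """v^T Q v with v in homogeneous coordinates."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ Q @ vh)

def collapse_cost(Q1, Q2, v, feature_penalty=1.0):
    """Cost of contracting an edge to position v; a feature-aware variant
    could raise `feature_penalty` for vertices near MPEG-4 feature points
    (eyes, lips) so the simplifier collapses them last."""
    return feature_penalty * vertex_error(Q1 + Q2, v)

# A vertex on the plane z = 0 has zero error; moving it off the plane
# costs the squared distance, as expected from the metric.
Q = plane_quadric(0, 0, 1, 0)                 # plane z = 0
assert vertex_error(Q, [1, 2, 0]) == 0.0
assert vertex_error(Q, [1, 2, 3]) == 9.0      # distance^2 = 3^2
```

Weighting collapse costs this way lets a continuous-LoD pipeline reduce the mesh aggressively in featureless regions while keeping the MPEG-4 feature-point neighbourhoods intact for animation.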