This paper presents a survey of the research carried out to date in the area of computer-based deformable modelling. Due to their cross-disciplinary nature, deformable modelling techniques have been the subject of vigorous research over the past three decades and have found numerous applications in the fields of machine vision (image analysis, image segmentation, image matching, and motion tracking), visualisation (shape representation and data fitting), and computer graphics (shape modelling, simulation, and animation). Previous review papers have been field- or application-specific and have therefore been limited in their coverage of techniques. This survey focuses on general deformable models for computer-based modelling, which can be used for computer graphics, visualisation, and various image processing applications. The paper organises the various approaches by technique and provides a description, critique, and overview of applications for each. Finally, the state of the art of deformable modelling is discussed, and areas of importance for future research are suggested.
Abstract. This paper is concerned with capturing the dynamics of facial expression, that is, the intensity and timing of an expression and its formation. To achieve this, we developed a technique that can accurately classify and differentiate between subtle and similar expressions involving the lower face. This is achieved by using Locally Linear Embedding (LLE) to reduce the dimensionality of the dataset and applying Support Vector Machines (SVMs) to classify expressions. We then extended this technique to estimate the dynamics of facial expression formation in terms of intensity and timing.
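The LLE-then-SVM pipeline described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the synthetic feature vectors, the number of neighbours, the embedding dimensionality, and the RBF kernel are all assumptions standing in for the paper's facial-feature data and tuned parameters.

```python
# Hedged sketch of the abstract's pipeline: LLE dimensionality reduction
# followed by SVM classification. Synthetic data stands in for facial features.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "expression feature" vectors: 200 samples, 100 dimensions,
# two classes separated by a mean shift (a toy stand-in for real data).
X = rng.normal(size=(200, 100))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.75  # shift class 1 so the classes are separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce to a low-dimensional manifold with LLE, then classify with an SVM.
# scikit-learn's LocallyLinearEmbedding supports out-of-sample transform,
# so it can be chained in a Pipeline.
clf = make_pipeline(
    LocallyLinearEmbedding(n_neighbors=10, n_components=5, random_state=0),
    SVC(kernel="rbf"),
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Chaining the two stages in a `Pipeline` ensures the embedding is fitted only on training data, avoiding leakage into the held-out evaluation.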
This paper details a procedure for generating a function that maps an image of a neutral face to one depicting a desired expression, independent of age, sex, or skin colour. Facial expression synthesis is a growing and relatively new domain within computer vision. One of the fundamental problems in previous approaches to accurate expression synthesis is the lack of a consistent method for measuring expression, which inhibits the generation of a universal mapping function. This paper advances the domain by introducing the Facial Expression Shape Model (FESM) and the Facial Expression Texture Model (FETM). These are statistical models of facial expression based on an anatomical analysis of expression, the Facial Action Coding System (FACS). The FESM and the FETM allow for the generation of a universal mapping function. These models provide a robust means of upholding the rules of the FACS and are flexible enough to describe subjects that are not present during the training phase. We use these models in conjunction with several Artificial Neural Networks (ANNs) to generate photo-realistic images of facial expressions.