SUMMARY  We aim to synthesize individual facial images with expressions driven by muscular contraction parameters. We have proposed a method of calculating the muscular contraction parameters from an arbitrary face image without learning for each individual; as a result, we were able to generate not only an individual's facial expressions but also the facial expressions of various persons. In this paper, we propose a muscle-based facial model in which both linear muscles and sphincter muscles are defined, together with a method of synthesizing individual facial images with expressions from the muscular contraction parameters. First, an individual facial model with expression is generated by fitting the model to an arbitrary face image. Next, the muscular contraction parameters corresponding to the expression displacements of the input face image are calculated. Finally, the facial expression is synthesized by displacing the vertices of a neutral facial model according to the calculated muscular contraction parameters. Experimental results show that the newly introduced sphincter muscles make it possible to synthesize facial images whose expressions correspond to actual face images with arbitrary mouth or eye expressions.
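As a rough illustration of the final step (displacing the vertices of a neutral facial model according to contraction parameters), the following Python sketch applies a Waters-style linear-muscle deformation. It is a minimal sketch, not the authors' implementation: the function names, falloff terms, and default parameters are assumptions for illustration, and sphincter muscles (which this paper adds) would analogously contract vertices toward the muscle's ellipse center rather than toward an attachment point.

```python
# Minimal sketch (not the paper's implementation): deform a neutral face mesh
# from muscular contraction parameters, assuming a Waters-style linear muscle
# with angular and radial falloff. All geometry and parameters are illustrative.
import numpy as np


def linear_muscle_displacement(vertices, head, tail, contraction,
                               influence_angle=np.pi / 4, falloff_start=0.5):
    """Displace mesh vertices toward the muscle head (fixed attachment point).

    vertices    : (N, 3) neutral-model vertex positions
    head, tail  : (3,) muscle attachment (fixed) and insertion points
    contraction : scalar muscular contraction parameter in [0, 1]
    """
    axis = tail - head
    axis_len = np.linalg.norm(axis)
    axis_dir = axis / axis_len

    disp = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        to_v = v - head
        dist = np.linalg.norm(to_v)
        if dist == 0 or dist > axis_len:
            continue  # outside the muscle's zone of influence along its length
        cos_angle = np.dot(to_v / dist, axis_dir)
        if cos_angle < np.cos(influence_angle):
            continue  # outside the angular zone of influence
        # Angular falloff: strongest for vertices near the muscle axis.
        angular = (cos_angle - np.cos(influence_angle)) / (1.0 - np.cos(influence_angle))
        # Radial falloff: fades toward the end of the muscle.
        if dist < falloff_start * axis_len:
            radial = 1.0
        else:
            radial = np.cos((dist - falloff_start * axis_len)
                            / ((1.0 - falloff_start) * axis_len) * np.pi / 2)
        # Pull the vertex toward the fixed attachment point.
        disp[i] = -contraction * angular * radial * to_v
    return disp


def synthesize_expression(neutral_vertices, muscles, contractions):
    """Sum per-muscle displacements over the neutral facial model."""
    deformed = neutral_vertices.copy()
    for (head, tail), c in zip(muscles, contractions):
        deformed += linear_muscle_displacement(neutral_vertices, head, tail, c)
    return deformed
```

In this kind of model, each expression is encoded only by the vector of contraction parameters, so the same parameters can be applied to different individuals' neutral models once the muscle geometry has been fitted to each face.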