In this paper, we present a novel approach for interactively and intuitively editing 3D facial animation. The approach determines a new expression by combining user-specified constraints with the priors contained in a pre-recorded facial expression set, which effectively avoids the unnatural expressions that can result from user constraints alone. It is based on the framework of example-based linear interpolation and adaptively segments the face model into soft regions according to user interaction. In independently modeling each region, we propose a new function that estimates the blending weight of each example so as to match the user constraints as well as the spatio-temporal properties of the face set. In blending the regions into a single expression, we present a new criterion that fully exploits spatial proximity and spatio-temporal motion consistency over the face set to measure the coherency between vertices, and we use this coherency to reasonably propagate the influence of each region to the entire face model. Experiments show that our approach can create a natural expression that optimally satisfies the user-desired goal, even under inappropriate user edits.
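The two core operations described above, blending example expressions with per-example weights and propagating a region's result to the rest of the face via per-vertex coherency, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the convex-combination normalization, and the simple linear coherency falloff are all assumptions made for the sketch.

```python
import numpy as np

def blend_examples(examples, weights):
    """Example-based linear interpolation for one soft region.

    examples: (K, V, 3) array of K example expressions over V vertices.
    weights:  (K,) non-negative blend weights (in the paper these would be
              estimated from the user constraints and the spatio-temporal
              properties of the face set; here they are given directly).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize: keep blend in the example span
    return np.einsum('k,kvd->vd', weights, np.asarray(examples, dtype=float))

def propagate_region_edit(base, region_blend, coherency):
    """Blend a region's expression into the whole face.

    base:         (V, 3) current expression of the full face model.
    region_blend: (V, 3) expression produced for the edited soft region.
    coherency:    (V,) per-vertex weights in [0, 1] (1 inside the region,
                  falling off with vertex coherency); a stand-in for the
                  spatial-proximity / motion-consistency criterion.
    """
    c = np.clip(np.asarray(coherency, dtype=float), 0.0, 1.0)[:, None]
    return (1.0 - c) * base + c * region_blend

# Toy usage: two 4-vertex examples, equal weights -> midpoint expression.
examples = np.stack([np.zeros((4, 3)), np.ones((4, 3))])
mid = blend_examples(examples, [1.0, 1.0])

# Propagate with a soft falloff: fully applied at vertex 1, untouched at vertex 0.
result = propagate_region_edit(np.zeros((4, 3)), mid * 2.0, [0.0, 1.0, 0.5, 0.25])
```

The key point of the sketch is that both stages are linear: the expression stays inside the span of the examples, and the soft coherency weights avoid visible seams at region boundaries.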