Using our previously developed system, we investigated how the choice of training data affects facial expression recognition accuracy: the utterance "taro" served as training data for the intentional facial expressions "angry," "sad," and "surprised," while the training data of the respective pronunciations were used for the intentional facial expressions "happy" and "neutral." With the proposed method, the three facial expressions "happy," "neutral," and "other" were discriminated with an average accuracy of 72.4% across the utterances "taro," "koji," and "tsubasa."