Most traditional facial expression recognition systems track facial components such as the eyes, eyebrows, and mouth for feature extraction. Although these features provide useful cues for expression recognition, finer changes in the facial muscles can also be exploited to classify facial expressions. This study locates facial components with an active shape model and extracts seven dynamic face regions (the frown area, nose wrinkle, two nasolabial folds, two eyebrows, and mouth). The proposed semantic facial features are then computed from these regions using directional gradient operators, namely Gabor filters and the Laplacian of Gaussian. A multi-class support vector machine (SVM) was trained to classify six facial expressions (neutral, happiness, surprise, anger, disgust, and fear). On the widely used Cohn-Kanade database, the average recognition rate reached 94.7%. In addition, 20 participants were invited for an online test, where the recognition rate was about 93% in a real-world environment. These results demonstrate that the proposed semantic facial features effectively represent the changes between facial expressions. Moreover, because fewer features are deployed, the time complexity is lower than that of other SVM-based approaches.
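To make the pipeline concrete, the sketch below illustrates the general idea described above: directional-gradient responses (Gabor and Laplacian-of-Gaussian) pooled over pre-cropped dynamic face regions and fed to a multi-class SVM. It is a minimal illustration, not the authors' implementation; the region crops, filter parameters, and helper names (`region_features`, `face_features`, `train_classifier`) are assumptions for demonstration only.

```python
# Minimal sketch (assumed parameters, not the authors' implementation):
# Gabor + Laplacian-of-Gaussian responses pooled per face region, classified
# with a multi-class SVM.
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.filters import gabor
from sklearn.svm import SVC

EXPRESSIONS = ["neutral", "happiness", "surprise", "anger", "disgust", "fear"]

def region_features(region,
                    frequencies=(0.1, 0.2),
                    thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Pool directional-gradient responses from one dynamic face region.

    `region` is assumed to be a grayscale crop (e.g. frown, nasolabial fold,
    eyebrow, or mouth) already localized by an active shape model.
    """
    feats = []
    for f in frequencies:
        for t in thetas:
            real, _ = gabor(region, frequency=f, theta=t)  # Gabor response
            feats.append(np.abs(real).mean())
    log = gaussian_laplace(region.astype(float), sigma=2.0)  # Laplacian of Gaussian
    feats.append(np.abs(log).mean())
    return np.array(feats)

def face_features(regions):
    """Concatenate features from the seven dynamic regions into one vector."""
    return np.concatenate([region_features(r) for r in regions])

def train_classifier(samples, labels):
    """Train a one-vs-rest multi-class SVM on lists of region crops per sample."""
    X = np.stack([face_features(regions) for regions in samples])
    clf = SVC(kernel="rbf", decision_function_shape="ovr")
    clf.fit(X, labels)
    return clf
```

The pooled mean responses keep the feature vector short, which reflects the abstract's point that a compact feature set keeps the SVM's time complexity low compared with approaches that feed dense filter responses directly to the classifier.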