In general, facial expression recognition involves several steps: data acquisition, pre-processing, segmentation, feature extraction, and classification. Automatic facial expression recognition has become a crucial technology in the computer vision field, with applications including identification and security, medicine, and monitoring. A facial expression recognition system requires an algorithmic pipeline built around two main blocks: feature extraction and classification. Arriving at an adequate pipeline demands a large experimental campaign, notably to identify the best feature extraction and classification methods for robust facial expression recognition with high accuracy. It is therefore essential to analyse the data using multiple feature extraction and classification methods. In this paper, we propose an approach that automates this data analysis by repeating the tests made to tune and compare feature extraction and classification methods. We evaluate the proposed approach on video sequences covering the fundamental emotion states: neutral expression, disgust, fear, happiness, sadness, anger, and surprise. To transform the face images into feature vectors, we use shape, texture, and contour descriptors, which allows the images to be stored as tables of vectors. The table associated with each descriptor is analysed with five classifiers: support vector machine, linear discriminant analysis, k-nearest neighbors, naïve Bayes, and binary decision tree. Ten-fold and leave-one-out cross-validation, together with grid search, are used to tune the hyperparameters of these methods and compare them, with the average recognition rate (F-measure) as the evaluation metric. Experiments on the Extended Cohn-Kanade (CK+) database show that the proposed data analysis approach can find the optimal combination to separate the data classes and identify the expression, with an F-score of 96.44%.
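The comparison loop described above (several classifiers tuned by grid search under k-fold cross-validation, scored by macro F-measure) can be sketched as follows. This is an illustrative sketch using scikit-learn, not the authors' code: the synthetic feature table, the candidate hyperparameter grids, and the seven-class setup are assumptions standing in for the descriptor vectors extracted from the CK+ images.

```python
# Hedged sketch: compare five classifiers via grid search + 10-fold CV,
# scoring each by macro F-measure, as in the pipeline described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature table: in the paper, each row would be a
# shape/texture/contour descriptor vector for one face image,
# labelled with one of the seven emotion states.
X, y = make_classification(n_samples=350, n_features=20,
                           n_informative=10, n_classes=7,
                           random_state=0)

# Candidate classifiers with (assumed, illustrative) hyperparameter grids.
candidates = {
    "SVM": (SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}),
    "LDA": (LinearDiscriminantAnalysis(), {}),
    "kNN": (KNeighborsClassifier(), {"n_neighbors": [1, 3, 5]}),
    "NaiveBayes": (GaussianNB(), {}),
    "Tree": (DecisionTreeClassifier(random_state=0),
             {"max_depth": [3, 5, None]}),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
results = {}
for name, (clf, grid) in candidates.items():
    # Grid search tunes each classifier's hyperparameters; the CV
    # score is the average F-measure across the 10 folds.
    search = GridSearchCV(clf, grid, scoring="f1_macro", cv=cv)
    search.fit(X, y)
    results[name] = search.best_score_

best = max(results, key=results.get)
print(best, round(results[best], 4))
```

Swapping `cv` for `LeaveOneOut()` gives the leave-one-out variant mentioned in the abstract; running the same loop once per descriptor table yields the full comparison over feature extraction and classification combinations.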