We present one of the first studies attempting to differentiate between genuine and acted emotional expressions using EEG data, along with the first EEG dataset (available here) containing recordings of subjects producing genuine and fake emotional expressions. Our experimental paradigm is built around the classification of smiles: genuine smiles, fake/acted smiles, and neutral expressions. We propose multiple methods to extract intrinsic features from the EEG recordings of these three expressions. Specifically, we extracted EEG features using three time-frequency analysis methods: the discrete wavelet transform (DWT), empirical mode decomposition (EMD), and a combination in which DWT is incorporated into EMD (DWT-EMD), applied in three frequency bands. We then evaluated the proposed methods using several classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), and artificial neural networks (ANN). The experiment was carried out on 28 subjects, each of whom performed the three types of emotional expression: genuine, neutral, and fake/acted. The results showed that incorporating DWT into EMD extracted more hidden features than either DWT or EMD alone. The power spectral features extracted by DWT, EMD, and DWT-EMD showed different neural patterns across the three emotional expressions in all frequency bands. In binary classification experiments, DWT or EMD alone achieved acceptable accuracy, reaching a maximum of 84% across all emotion types, classifiers, and bands. Meanwhile, the combined DWT-EMD method with an ANN achieved the highest accuracy in distinguishing genuine from fake expressions, with average accuracies of 94.3% in the alpha band and 84.1% in the beta band. Our results suggest combining DWT and EMD in future emotion studies and highlight the association of the alpha and beta frequency bands with emotion.
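As a rough illustration of the DWT-EMD feature extraction described above, the Python sketch below decomposes a single-channel signal with a DWT, runs EMD on each wavelet sub-band, and uses the log power of the leading intrinsic mode functions (IMFs) as features for a small ANN classifier. This is a minimal sketch, not the paper's implementation: the wavelet family (db4), decomposition level, number of retained IMFs, the log-power feature, and the use of the PyWavelets, PyEMD, and scikit-learn packages are all assumptions, and the toy data are random noise standing in for epoched, band-filtered EEG.

```python
import numpy as np
import pywt                                  # PyWavelets, for the DWT
from PyEMD import EMD                        # PyEMD package, for EMD
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def dwt_emd_features(signal, wavelet="db4", level=4, n_imfs=3):
    """DWT-EMD sketch: decompose a 1-D signal with a DWT, run EMD on
    each wavelet sub-band, and return log-power features of the
    leading IMFs (parameter choices here are illustrative)."""
    features = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):
        imfs = EMD()(coeffs)                 # IMFs of this sub-band
        powers = [np.log(np.mean(imf ** 2) + 1e-12)  # log mean-square power
                  for imf in imfs[:n_imfs]]
        powers += [0.0] * (n_imfs - len(powers))     # pad if EMD yields fewer IMFs
        features.extend(powers)
    return np.array(features)

# Toy data standing in for epoched EEG: 60 trials x 512 samples, 2 classes
# (genuine = 0, fake/acted = 1). Accuracy on this random data is chance-level;
# the point is only to show the shape of the pipeline.
rng = np.random.default_rng(0)
X = np.stack([dwt_emd_features(rng.standard_normal(512)) for _ in range(60)])
y = rng.integers(0, 2, size=60)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In a real analysis, the random signals would be replaced by EEG epochs filtered into the alpha and beta bands, with one binary classification run per band, mirroring the per-band comparisons reported above.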