Introduction: Recognizing emotional states provides valuable insight for detecting prolonged stress or persistent negative emotions in individuals. In this study, we explore the feasibility of using facial electromyography (fEMG) signals to accurately recognize emotions and to determine the optimal recording location.
Materials and methods: To investigate various emotions, we used the Continuously Annotated Signals of Emotion (CASE) dataset, which contains fEMG signals captured from three distinct muscle locations: zygomaticus major (zEMG), corrugator supercilii (cEMG), and trapezius (tEMG). Features were extracted from these fEMG signals in the time, frequency, and time-frequency domains. We identified the optimal muscle location for emotion recognition using several machine learning models, namely logistic regression (LR), support vector machine (SVM), and random forest (RF), and validated the results with 10-fold cross-validation. Additionally, we identified the most influential features for distinguishing between emotions using the RF feature ranking method.
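To make this pipeline concrete, the following Python sketch illustrates the approach under stated assumptions: it uses NumPy, SciPy, PyWavelets, and scikit-learn (our tooling choices, not necessarily the study's implementation), computes a few representative features from each domain on synthetic placeholder windows, evaluates an RF classifier with 10-fold cross-validation, and ranks features by RF importance. The specific features, sampling rate, and window length are illustrative only.

```python
import numpy as np
import pywt
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def extract_features(window, fs=1000.0):
    """Illustrative time-, frequency-, and time-frequency-domain features
    for one fEMG window (not the study's exact feature set)."""
    # Time domain: mean absolute value, root mean square, waveform length
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    # Frequency domain: mean and median frequency from the Welch PSD
    freqs, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
    mean_freq = np.sum(freqs * psd) / np.sum(psd)
    cum_psd = np.cumsum(psd)
    median_freq = freqs[np.searchsorted(cum_psd, cum_psd[-1] / 2)]
    # Time-frequency domain: relative energies of wavelet sub-bands
    coeffs = pywt.wavedec(window, "db4", level=4)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rel_energies = energies / energies.sum()
    return np.concatenate([[mav, rms, wl, mean_freq, median_freq], rel_energies])

# Placeholder data standing in for segmented fEMG windows and emotion labels.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 1000))
y = rng.integers(0, 2, size=200)
X = np.vstack([extract_features(w) for w in windows])

# RF classifier evaluated with 10-fold cross-validation.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(rf, X, y, cv=cv)
print(f"10-fold CV accuracy: {scores.mean():.2%}")

# RF feature ranking via impurity-based importances.
rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("Feature indices ranked by importance:", ranking)
```

Stratified folds keep the emotion-class proportions comparable across folds, which matters when class sizes are unbalanced.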
Results: We attained the highest average accuracy of 74.79% for emotion classification by using the 31 top-ranked features drawn from the time, frequency, and time-frequency domains of the three fEMG signals (zEMG, cEMG, and tEMG) with the RF classifier. Moreover, we achieved an average accuracy of 74.17% using only the top 10 time-domain features extracted from the zEMG signal with the LR classifier. In summary, this study demonstrates promising results for efficient fEMG-based emotion recognition and presents an innovative approach to affective computing.
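As a hedged continuation of the sketch above (reusing X, y, cv, ranking, and cross_val_score from it), the snippet below shows how a top-ranked feature subset from the RF ranking can be fed to an LR classifier; it does not reproduce the study's actual 10 time-domain zEMG features or its reported accuracies.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Keep the 10 top-ranked features (the small placeholder feature set above
# has exactly 10 columns, so this keeps them all) and evaluate LR on them.
X_top = X[:, ranking[:10]]
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
lr_scores = cross_val_score(lr, X_top, y, cv=cv)
print(f"LR on top-ranked features, 10-fold CV accuracy: {lr_scores.mean():.2%}")
```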