Significant efforts have been made in the past decade to humanize both the form and function of social robots in order to increase their acceptance among humans. To this end, social robots have recently been combined with brain-computer interface (BCI) systems in an attempt to give them an understanding of human mental states, particularly emotions. However, emotion recognition using BCIs poses several challenges, such as the subjectivity of emotions, their contextual dependency, and the lack of reliable neuro-metrics for real-time emotion processing. Furthermore, BCI systems introduce their own limitations, such as the bias-variance trade-off and the high dimensionality of, and noise in, the input data. In this study, we sought to address some of these challenges by detecting human emotional states from electroencephalography (EEG) brain activity during human-robot interaction (HRI). EEG signals were collected from 10 participants who interacted with a Pepper robot that exhibited either a positive or a negative personality. Using valence and arousal measures derived from frontal brain asymmetry (FBA), we trained several machine learning models to classify the participants' mental states in response to the robot's personality. To improve classification accuracy, all proposed classifiers were subjected to a Global Optimization Model (GOM) based on feature selection and hyperparameter optimization. The results show that a user's emotional responses to the robot's behavior can be classified from EEG signals with an accuracy of up to 92%. The outcome of this study contributes to the first level of the Theory of Mind (ToM) in HRI, enabling robots to comprehend users' emotional responses and attribute mental states to them. Our work advances the field of social and assistive robotics by paving the way for more empathetic and responsive HRI.
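To make the FBA-derived measures concrete, the sketch below shows one common way such indices are computed; it is illustrative only and not the exact pipeline of this study. The channel pair (F3/F4), the alpha and beta band limits, the Welch PSD estimator, and the beta/alpha arousal ratio are all assumptions introduced for the example.

```python
# Illustrative sketch only: a common frontal-asymmetry valence index and a
# beta/alpha arousal proxy. Channel names (F3/F4), band limits, and the
# arousal ratio are assumptions, not the paper's exact method.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Mean power spectral density of `signal` within `band` (Hz), via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def fba_valence(eeg_f3, eeg_f4, fs, alpha=(8.0, 13.0)):
    """FBA index: ln(alpha power at F4) - ln(alpha power at F3).
    Relatively lower left-frontal alpha (higher index) is commonly read as
    more positive valence / approach motivation."""
    return np.log(band_power(eeg_f4, fs, alpha)) - np.log(band_power(eeg_f3, fs, alpha))

def frontal_arousal(eeg_f3, eeg_f4, fs, alpha=(8.0, 13.0), beta=(13.0, 30.0)):
    """One common arousal proxy (assumed here): frontal beta/alpha power ratio."""
    a = band_power(eeg_f3, fs, alpha) + band_power(eeg_f4, fs, alpha)
    b = band_power(eeg_f3, fs, beta) + band_power(eeg_f4, fs, beta)
    return b / a

if __name__ == "__main__":
    fs = 256  # Hz, assumed sampling rate
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(0)
    # Synthetic F3/F4 traces: 10 Hz alpha plus noise, with stronger alpha on the left.
    f3 = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    f4 = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    print("valence index:", fba_valence(f3, f4, fs))
    print("arousal index:", frontal_arousal(f3, f4, fs))
```

In practice, such valence and arousal values would be computed per epoch and fed, together with other EEG features, to the classifiers described above.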