Emotion perception is critical for predicting human behavior. Emotional states can be captured in many ways, for example by observing bodily and behavioral cues. Physiological markers such as electroencephalography (EEG) have gained popularity because facial expressions do not always convey true emotion. This study has two main aims. The first is to classify four emotion categories from EEG data using deep learning architectures. The second is to increase the number of samples in the dataset; to this end, a novel data augmentation approach, the Extreme Learning Machine Wavelet Auto-Encoder (ELM-W-AE), is proposed. The proposed approach is both simpler and faster than other synthetic data augmentation methods. Large datasets are important for the performance of deep architectures, so both classical and synthetic data augmentation approaches have recently become popular. The ELM-W-AE was chosen for its efficiency and its ability to reproduce signal detail. The ELM-AE structure uses wavelet activation functions such as Gaussian, GgW, Mexican hat, Meyer, Morlet, and Shannon. The EEG signals are converted into scalogram images using the Continuous Wavelet Transform (CWT), and these images are classified with deep convolutional architectures; the ResNet18 architecture is used to recognize emotions. The proposed technique is evaluated on the GAMEMO dataset, which was collected during gameplay and in which each of the four emotional states is represented. The image dataset generated from the signals was split into 70% for training and 30% for testing. ResNet18 was fine-tuned with augmented images generated from the training set only, and it achieved 99.6% classification accuracy on the test set. Compared with other approaches on the same dataset, the proposed method yields an approximately 22% performance improvement.
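The abstract does not give the internals of the ELM-W-AE, but the general ELM autoencoder recipe is standard: random input weights, a nonlinear hidden layer, and output weights solved in closed form by regularized least squares. The sketch below illustrates that recipe with a Mexican-hat wavelet activation (one of the six listed); all function names, the hidden-layer size, and the regularization value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mexican_hat(x):
    # Mexican-hat wavelet used as the hidden-layer activation
    # (one of the six wavelet activations mentioned in the text).
    return (1 - x**2) * np.exp(-x**2 / 2)

def elm_wae_fit(X, n_hidden=64, reg=1e-3, act=mexican_hat):
    # ELM autoencoder: input weights W and biases b are random and fixed;
    # only the output weights beta are learned, in closed form.
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = act(X @ W + b)  # hidden representation
    # Ridge-regularized least squares: beta = (H^T H + reg*I)^-1 H^T X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return W, b, beta

def elm_wae_reconstruct(X, W, b, beta, act=mexican_hat):
    # Reconstructions of training segments can serve as synthetic samples.
    return act(X @ W + b) @ beta

# Toy usage on EEG-like segments (200 segments, 32 samples each).
X = rng.standard_normal((200, 32))
W, b, beta = elm_wae_fit(X)
X_hat = elm_wae_reconstruct(X, W, b, beta)
print(X_hat.shape)  # (200, 32)
```

Because training reduces to one linear solve, this kind of augmenter is far cheaper than iteratively trained generative models, which matches the speed claim made for the ELM-W-AE.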
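The CWT-to-scalogram step mentioned above can be sketched with plain NumPy: convolve the signal with dilated copies of a mother wavelet and take the magnitude at each scale. The Morlet wavelet, scale range, and sampling rate below are illustrative assumptions; the paper's exact CWT settings are not stated in the abstract.

```python
import numpy as np

def morlet(t, w=5.0):
    # Complex Morlet mother wavelet (illustrative choice).
    return np.pi**-0.25 * np.exp(1j * w * t) * np.exp(-t**2 / 2)

def cwt_scalogram(signal, scales):
    # One row per scale: |signal * dilated wavelet| gives the scalogram.
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        x = np.arange(-4 * s, 4 * s + 1) / s  # wavelet support in scale units
        psi = morlet(x) / np.sqrt(s)          # energy-normalized dilation
        out[i] = np.abs(np.convolve(signal, psi, mode="same"))
    return out

# Toy usage: 1 s of a 10 Hz sinusoid at a 128 Hz sampling rate.
fs = 128.0
t = np.arange(128) / fs
sig = np.sin(2 * np.pi * 10 * t)
scalogram = cwt_scalogram(sig, scales=np.arange(1, 16))
print(scalogram.shape)  # (15, 128)
```

In a pipeline like the one described, each 2-D scalogram would then be rendered as an RGB image and resized to the 224x224 input expected by ResNet18.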