Emotion recognition based on electroencephalography (EEG) signals has garnered substantial attention in recent years and finds extensive applications in the domains of medicine and psychology. However, individual differences in EEG signals pose a challenge to accurate emotion recognition and limit the widespread adoption of such techniques. To address this issue, this study proposes a model that combines random forest weights (RFWs) with a four-dimensional convolutional recurrent neural network (4DCRNN) to minimize individual differences and capture emotion-relevant information. By integrating these two components, the proposed model aims to improve the accuracy and generalization capability of emotion recognition. To evaluate the performance of the proposed model, experiments were conducted on the DEAP and SEED datasets. The results demonstrate the effectiveness of the RFW-4DCRNN in emotion recognition. Specifically, the proposed model achieves mean accuracies of 94.98% and 94.21% for subject-dependent recognition on the DEAP and SEED datasets, respectively. For subject-independent emotion recognition, the model achieves mean accuracies of 81.70% and 91.12% on the two datasets, respectively. These results highlight the capability of the RFW-4DCRNN to effectively recognize emotions and to improve generalization performance. Overall, this study presents an approach to addressing individual differences in EEG-based emotion recognition. The RFW-4DCRNN demonstrates promising results in terms of accuracy and generalization capability, offering potential for the advancement and application of emotion recognition techniques.
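
The following is a minimal illustrative sketch, not the authors' implementation, of one plausible reading of the RFW-4DCRNN pipeline described above: random-forest feature importances are used to re-weight EEG features, which are then arranged into a 4D tensor (segments x frequency bands x electrode-grid height x width) and classified by a small CNN plus LSTM. All shapes, layer sizes, dataset layout, and the synthetic data are assumptions made for illustration; the paper's actual architecture and weighting scheme may differ.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Assumed toy dimensions: 200 trials, 6 time segments, 4 frequency bands,
# features mapped onto an 8x9 electrode grid (a DEAP-style layout).
N_TRIALS, N_SEG, N_BANDS, H, W = 200, 6, 4, 8, 9
X = rng.standard_normal((N_TRIALS, N_SEG, N_BANDS, H, W)).astype(np.float32)
y = rng.integers(0, 2, size=N_TRIALS)  # binary emotion labels (e.g., valence)

# Step 1: random-forest weights (RFW). Fit a forest on flattened features and
# use its normalized feature importances as per-feature weights.
flat = X.reshape(N_TRIALS, -1)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(flat, y)
w = rf.feature_importances_
w = w / (w.max() + 1e-12)                                  # scale weights to [0, 1]
X_weighted = (flat * w).astype(np.float32).reshape(X.shape)  # re-weight, restore 4D layout

# Step 2: a small per-segment CNN followed by an LSTM across segments,
# standing in for the 4DCRNN classifier.
class CRNN(nn.Module):
    def __init__(self, n_bands=N_BANDS, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(64, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                                   # x: (batch, seg, bands, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, s, -1)    # spatial features per segment
        out, _ = self.rnn(feats)                            # temporal modeling across segments
        return self.head(out[:, -1])                        # classify from the final state

model = CRNN()
logits = model(torch.from_numpy(X_weighted))
print(logits.shape)  # torch.Size([200, 2])
```

In this sketch, weighting flattened features by forest importances is simply one way to suppress subject-specific, emotion-irrelevant dimensions before the deep model; a real training loop, cross-validation over subjects, and proper EEG feature extraction (e.g., band-wise differential entropy) would be needed to reproduce the reported results.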