Detecting emotions in people with sensory disabilities remains challenging owing to the complexity of generalizing and modeling sequences of brain signals. Brain–computer interface technology has therefore been used to study people's emotions and behavior from brain signals. Emotion analysis is a widely used and robust data mining method, offering an excellent opportunity to monitor, evaluate, determine, and understand consumer sentiment toward a product or service. Yet no emotion recognition model for people with visual disabilities has been evaluated, even though previous studies have proposed machine learning approaches for classifying emotions in people with sensory disabilities. Therefore, this study introduces a new salp swarm algorithm with deep recurrent neural network-based textual emotion analysis (SSADRNN-TEA) technique for disabled persons. The main aim of the SSADRNN-TEA technique is to detect and classify emotions expressed in social media content. In this work, the SSADRNN-TEA technique first preprocesses the input data to make it compatible with the later processing stages and then applies BERT word embedding. A deep recurrent neural network (DRNN) model is then exploited for emotion classification. Finally, the salp swarm algorithm (SSA) is used to optimally tune the DRNN hyperparameters. A comprehensive set of experiments was conducted to simulate the real-time performance of the SSADRNN-TEA method. The experimental results revealed the improved performance of the SSADRNN-TEA technique across several evaluation metrics.
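
As a rough illustration of the hyperparameter-tuning step, the following Python sketch applies the standard salp swarm update rules to a two-dimensional hyperparameter vector (learning rate and hidden size). The bounds, population size, iteration count, and the placeholder objective are assumptions for illustration only; in the SSADRNN-TEA method the objective would be the DRNN's validation loss, which is not reproduced here.

# Minimal sketch of salp swarm algorithm (SSA) hyperparameter search.
# All bounds, dimensions, and the objective below are illustrative assumptions;
# in the SSADRNN-TEA method the objective would be the DRNN validation loss.
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameter vector: [log10(learning rate), hidden units] -- assumed ranges.
lb = np.array([-5.0, 32.0])
ub = np.array([-1.0, 512.0])

def objective(x):
    """Placeholder standing in for the DRNN validation loss at hyperparameters x."""
    lr, hidden = 10 ** x[0], x[1]
    # Smooth stand-in surface with a minimum near lr = 1e-3, hidden = 256.
    return (np.log10(lr) + 3.0) ** 2 + ((hidden - 256.0) / 128.0) ** 2

n_salps, n_iters, dim = 20, 50, 2
salps = lb + rng.random((n_salps, dim)) * (ub - lb)        # initial population
fitness = np.array([objective(s) for s in salps])
best_idx = fitness.argmin()
food, food_fit = salps[best_idx].copy(), fitness[best_idx]  # food source = best so far

for t in range(1, n_iters + 1):
    c1 = 2.0 * np.exp(-(4.0 * t / n_iters) ** 2)           # exploration/exploitation balance
    for i in range(n_salps):
        if i == 0:                                          # leader moves around the food source
            c2, c3 = rng.random(dim), rng.random(dim)
            step = c1 * ((ub - lb) * c2 + lb)
            salps[i] = np.where(c3 >= 0.5, food + step, food - step)
        else:                                               # followers track the salp ahead
            salps[i] = 0.5 * (salps[i] + salps[i - 1])
        salps[i] = np.clip(salps[i], lb, ub)
        fit = objective(salps[i])
        if fit < food_fit:                                  # update the global best (food source)
            food, food_fit = salps[i].copy(), fit

print("best hyperparameters: lr=%.2e, hidden=%d (loss %.4f)"
      % (10 ** food[0], round(food[1]), food_fit))

In practice, each objective evaluation would train the DRNN with the candidate hyperparameters and return its validation loss, which is considerably more expensive than the stand-in surface used above.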