Multimodal emotion recognition and analysis is a developing research field, and improving the multimodal fusion mechanism plays a key role in recognizing emotions more accurately. The present study aimed to optimize the performance of the emotion recognition system and presents a model for multimodal emotion recognition from audio, text, and video data. First, the data were fused pairwise, as an audio-video combination and an audio-text combination, and the outputs of these two branches were then fused together, so the final output accounts for the shared features of the audio, text, and video data. A convolutional neural network combined with long short-term memory (CNN-LSTM) was used to extract audio features, and the Inception-ResNet-v2 network was applied to extract facial expression features from the video. The fused audio-video features were passed through an LSTM and fed to a softmax classifier to recognize emotion from the audio-video fusion. In addition, the CNN-LSTM was arranged as a two-channel model for learning audio emotion features, a Bi-LSTM network was used to extract text features, and softmax was used to classify the fused audio-text features. Finally, the outputs of the two branches were fused for the final classification, with a logistic regression model performing this fusion and classification. The results indicated that the recognition accuracy of the proposed method on the IEMOCAP dataset was 82.9%.
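The following is a minimal PyTorch sketch of the fusion scheme described above, not the authors' implementation: layer sizes, the number of emotion classes, the spectrogram input shape, and the use of precomputed Inception-ResNet-v2 video features are illustrative assumptions.

```python
# Hedged sketch of the two-branch fusion: audio+video and audio+text are
# classified separately, then their softmax outputs are fused by a final
# logistic-regression-style layer. Dimensions are assumptions.
import torch
import torch.nn as nn

class AudioCNNLSTM(nn.Module):
    """CNN over spectrogram frames followed by an LSTM (audio branch)."""
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True)

    def forward(self, x):                      # x: (batch, n_mels, time)
        h = self.conv(x).transpose(1, 2)       # (batch, time, 128)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]                         # (batch, hidden)

class TextBiLSTM(nn.Module):
    """Bi-LSTM over word embeddings (text branch)."""
    def __init__(self, vocab=10000, emb=300, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        _, (h_n, _) = self.lstm(self.emb(tokens))
        return torch.cat([h_n[-2], h_n[-1]], dim=1)   # (batch, 2*hidden)

class FusionModel(nn.Module):
    """Decision-level fusion of the audio-video and audio-text branches."""
    def __init__(self, n_classes=4, video_dim=1536, hidden=128):
        super().__init__()
        self.audio = AudioCNNLSTM(hidden=hidden)
        self.text = TextBiLSTM(hidden=hidden)
        # Video features are assumed to be precomputed by a pretrained
        # Inception-ResNet-v2 (e.g. its 1536-d pooled output).
        self.av_lstm = nn.LSTM(hidden + video_dim, hidden, batch_first=True)
        self.av_head = nn.Linear(hidden, n_classes)
        self.at_head = nn.Linear(hidden + 2 * hidden, n_classes)
        self.final = nn.Linear(2 * n_classes, n_classes)   # logistic-regression-style fusion

    def forward(self, spec, video_feats, tokens):
        a = self.audio(spec)                                # (batch, hidden)
        av = torch.cat([a, video_feats], dim=1).unsqueeze(1)
        _, (h_av, _) = self.av_lstm(av)
        p_av = torch.softmax(self.av_head(h_av[-1]), dim=1)
        t = self.text(tokens)
        p_at = torch.softmax(self.at_head(torch.cat([a, t], dim=1)), dim=1)
        return self.final(torch.cat([p_av, p_at], dim=1))   # final class logits

model = FusionModel()
logits = model(torch.randn(2, 64, 100),            # spectrogram batch
               torch.randn(2, 1536),               # video feature batch
               torch.randint(0, 10000, (2, 20)))   # token-id batch
print(logits.shape)                                # torch.Size([2, 4])
```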