This paper presents a methodology that combines speech signals and textual information to improve the confidence level of emotion classification through threshold-based fusion. Acoustic features are extracted from the speech signal to characterize the speaker's behavior, and Support Vector Machines (SVMs) are used to recognize emotional states from these features. For the textual analysis, emotional words and emotional content are manually defined and labeled, and an emotion intensity level is calculated for each. The final emotional state is then predicted by fusing the acoustic and textual results with a threshold-based rule. Results from the proposed approach show that the combined system achieves higher accuracy than either of the two individual methodologies alone.
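The threshold-based fusion step can be sketched as follows. This is a minimal illustration, not the paper's actual method: the emotion labels, weighting scheme, and threshold value are all assumptions chosen for the example, and the real system derives its scores from SVM outputs and labeled emotional-word intensities.

```python
# Hypothetical sketch of threshold-based fusion: combine per-emotion
# confidence scores from an acoustic (SVM) pipeline and a textual
# (emotion-intensity) pipeline, and accept a label only when the
# fused confidence clears a threshold. Weights and threshold are
# illustrative assumptions, not the paper's parameters.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fuse(acoustic_scores, text_scores, threshold=0.5, w_acoustic=0.6):
    """Return the fused emotion label, or None if no emotion
    reaches the confidence threshold."""
    fused = {}
    for emo in EMOTIONS:
        fused[emo] = (w_acoustic * acoustic_scores.get(emo, 0.0)
                      + (1.0 - w_acoustic) * text_scores.get(emo, 0.0))
    best = max(fused, key=fused.get)
    return best if fused[best] >= threshold else None

# Example: the speech classifier leans toward anger and the text
# analysis agrees, so the fused confidence exceeds the threshold.
acoustic = {"anger": 0.7, "happiness": 0.1, "sadness": 0.1, "neutral": 0.1}
textual  = {"anger": 0.8, "happiness": 0.0, "sadness": 0.1, "neutral": 0.1}
print(fuse(acoustic, textual))  # anger: 0.6*0.7 + 0.4*0.8 = 0.74 >= 0.5
```

When the two modalities disagree or both are weakly confident, the fused score falls below the threshold and no label is emitted, which is one way such a fusion rule can raise the confidence of the labels it does accept.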