Incorporating prior knowledge into recurrent neural networks (RNNs) is of great importance for many natural language processing tasks. However, most prior knowledge comes in structured form and is difficult to exploit within the existing RNN framework. By extracting logic rules from structured knowledge and embedding the extracted rules into the RNN, this paper proposes an effective framework for incorporating prior information into RNN models. First, we demonstrate that commonly used prior knowledge, including knowledge graphs, social graphs, and syntactic dependencies, can be decomposed into sets of logic rules. Second, we present a technique for embedding a set of logic rules into the RNN by means of feedback masks. Finally, we apply the proposed approach to sentiment classification and named entity recognition tasks. Extensive experimental results verify the effectiveness of the embedding approach, and the encouraging results suggest that it has potential for other NLP tasks.
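The abstract does not specify how the feedback masks are constructed, so the following is only a minimal sketch of one plausible reading: a rule-derived binary mask gates the recurrent (feedback) connection of a simple RNN at each step. The class name `RuleMaskedRNN` and the `rule_mask` input are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: gating an RNN's recurrent feedback with a rule-derived mask.
import torch
import torch.nn as nn

class RuleMaskedRNN(nn.Module):
    """Elman-style RNN whose hidden-state feedback is gated by a per-step
    binary mask derived from a logic rule (e.g., a syntactic-dependency rule)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, rule_mask):
        # x:         (seq_len, batch, input_size)
        # rule_mask: (seq_len, batch, hidden_size) with entries in {0, 1};
        #            a 0 suppresses the recurrent feedback for that unit/step.
        h = x.new_zeros(x.size(1), self.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            h = self.cell(x[t], h * rule_mask[t])  # mask only the feedback path
            outputs.append(h)
        return torch.stack(outputs), h

# Toy usage: full feedback everywhere except step 2, where a rule fires.
seq_len, batch, d_in, d_h = 5, 2, 8, 16
x = torch.randn(seq_len, batch, d_in)
mask = torch.ones(seq_len, batch, d_h)
mask[2] = 0.0
out, h_last = RuleMaskedRNN(d_in, d_h)(x, mask)
print(out.shape)  # torch.Size([5, 2, 16])
```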
Weibo emotion recognition is one of the main tasks in the study of social public opinion. BI-LSTM, a derivative of the RNN, has been widely used for text emotion analysis. However, existing models do not make good use of prior information such as emotion words and emojis, and readers attend to different keywords to form different understandings of a text. Therefore, this paper proposes a Multi-view and Attention-Based BI-LSTM method for Weibo emotion recognition. First, we use an emotion ontology lexicon and Weibo emoticons to label each word in a sentence. Second, we use these labels as attention information, combining the attention mechanism with BI-LSTM to obtain a sentiment-word perspective and an emoticon perspective. Finally, the output of the original semantic perspective of the BI-LSTM is fused with the outputs of the above two perspectives to enhance classification performance. Experiments show that the proposed method improves the macro-average F1 score by 6% and the micro-average F1 score by 8% over an AVE-BI-LSTM baseline on the Weibo emotion recognition task.
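The exact architecture is not given in the abstract, so the sketch below only illustrates the general multi-view idea under stated assumptions: per-token binary labels from a sentiment lexicon and from emoticon matching are turned into attention weights over BI-LSTM states, and the resulting views are fused with the plain semantic view by concatenation. The class `MultiViewBiLSTM`, the mean-pooled semantic view, and fusion by concatenation are assumptions, not the authors' exact design.

```python
# Hypothetical sketch of label-guided attention over a BiLSTM with multi-view fusion.
import torch
import torch.nn as nn

class MultiViewBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, num_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(3 * 2 * hidden, num_classes)

    @staticmethod
    def label_attention(states, labels):
        # labels: (batch, seq_len), 1 where a lexicon/emoticon word occurs.
        # Normalize the labels into attention weights over the BiLSTM states
        # (yields a zero vector if no token in the sentence is labeled).
        w = labels.float()
        w = w / w.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.bmm(w.unsqueeze(1), states).squeeze(1)  # (batch, 2*hidden)

    def forward(self, tokens, senti_labels, emoji_labels):
        states, _ = self.bilstm(self.emb(tokens))             # (batch, seq, 2*hidden)
        semantic = states.mean(dim=1)                         # original semantic view
        senti = self.label_attention(states, senti_labels)    # sentiment-word view
        emoji = self.label_attention(states, emoji_labels)    # emoticon view
        return self.classifier(torch.cat([semantic, senti, emoji], dim=-1))

# Toy usage with random data.
model = MultiViewBiLSTM(vocab_size=100, emb_dim=32, hidden=64, num_classes=7)
tokens = torch.randint(0, 100, (4, 10))
senti = torch.randint(0, 2, (4, 10))
emoji = torch.randint(0, 2, (4, 10))
print(model(tokens, senti, emoji).shape)  # torch.Size([4, 7])
```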