Aspect-level sentiment analysis aims to identify the sentiment polarity of specific aspects mentioned in a given sentence or review. Graph-based models use a dependency tree to link each aspect word with its corresponding opinion words and have achieved strong results. However, for sentences with ambiguous syntactic structure, the dependency tree often fails to capture the dependencies accurately, which introduces noise and degrades model performance. To address this, we propose a syntactic and semantic enhanced multi-layer graph attention network (SSEMGAT), which introduces constituent trees into the syntactic features to complement dependency trees at the clause level and exploits aspect-aware attention in the semantic features to assign attention weights between specific aspects and their contexts. The enhanced syntactic and semantic features are then fed into a multi-layer graph attention network to classify the sentiment of each aspect. Using accuracy and Macro-F1 as evaluation metrics, we compare the proposed model with baseline and state-of-the-art models on the SemEval-2014 Task 4 Restaurant and Laptop datasets and the Twitter dataset, achieving competitive results.
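The abstract gives no implementation details, so purely as an illustration of the multi-layer graph-attention component it describes, the following minimal PyTorch sketch shows a single graph-attention layer that attends only along the edges of a syntax graph (e.g., one built from a dependency tree, a constituent tree, or a fusion of both). All names, dimensions, and design choices here are hypothetical, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """One graph-attention layer over a batched adjacency matrix.

    The adjacency matrix could be derived from a dependency tree, a
    constituent tree, or a combination of both; everything here is an
    illustrative sketch, not SSEMGAT itself.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # pairwise attention scorer

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, n, in_dim)  token/node representations
        # adj: (batch, n, n)       1 where a syntactic edge exists, 0 otherwise
        z = self.W(h)                                      # (b, n, d)
        n = z.size(1)
        adj = adj + torch.eye(n, device=adj.device)        # add self-loops
        zi = z.unsqueeze(2).expand(-1, -1, n, -1)          # (b, n, n, d)
        zj = z.unsqueeze(1).expand(-1, n, -1, -1)          # (b, n, n, d)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1))).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))         # attend only along graph edges
        alpha = torch.softmax(e, dim=-1)                   # neighbor attention weights
        return torch.relu(torch.matmul(alpha, z))          # (b, n, d) updated node features
```

Stacking several such layers and pooling the aspect-position representations before a softmax classifier is one common way such a multi-layer graph attention network is assembled.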
Aspect-Based Sentiment Analysis (ABSA), a fine-grained natural language processing task, aims to predict the sentiment polarity of different aspects in a sentence or document. Most existing work focuses on the correlation between aspect sentiment polarity and the local context, while the important deep correlations between the global context and aspect sentiment polarity have received little attention. In addition, there are few studies on Chinese and multilingual ABSA tasks. Building on the local context focus mechanism, we propose a multilingual model based on the interactive learning of local and global context focus, named LGCF. Compared with existing models, LGCF can simultaneously learn the correlation between the local context and aspect words and the correlation between the global context and aspect words, and it can analyze both Chinese and English reviews. Experiments on three Chinese benchmark datasets (Camera, Phone, and Car) and six English benchmark datasets (Laptop, Restaurant14, Restaurant16, Twitter, Tshirt, and Television) demonstrate that LGCF achieves compelling performance and efficiency improvements over several existing state-of-the-art models. Ablation results further verify the effectiveness of each component of LGCF.
INDEX TERMS: aspect-based sentiment analysis; Chinese sentiment analysis; multilingual ABSA; local and global context focus
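The model builds on the local context focus mechanism. As a hedged sketch of what a local-context weighting step of that kind typically looks like (the exact semantic-relative-distance formula, threshold, and fusion scheme below are assumptions drawn from the general LCF literature, not LGCF's actual design), consider:

```python
import torch

def local_context_weights(seq_len: int, aspect_start: int, aspect_len: int,
                          srd_threshold: int = 3) -> torch.Tensor:
    """Context-dynamic-weighting sketch for a local-context branch.

    Tokens whose semantic relative distance (SRD) to the aspect span is
    within the threshold keep full weight; more distant tokens are linearly
    down-weighted. All constants here are illustrative assumptions.
    """
    positions = torch.arange(seq_len, dtype=torch.float)
    aspect_center = aspect_start + (aspect_len - 1) / 2.0
    srd = (positions - aspect_center).abs() - aspect_len / 2.0
    weights = torch.where(
        srd <= srd_threshold,
        torch.ones(seq_len),
        1.0 - (srd - srd_threshold) / seq_len,
    )
    # Returned as (seq_len, 1) so it broadcasts over the hidden dimension of
    # the encoder outputs; a global branch would use the unweighted states,
    # and the two branches can then be fused (e.g., concatenation + linear).
    return weights.clamp(min=0.0).unsqueeze(-1)

# Example: a 20-token review with a 2-token aspect starting at position 5.
w = local_context_weights(seq_len=20, aspect_start=5, aspect_len=2)
```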
The TransE model plays a key role in dealing with data sparsity and has advanced knowledge graph completion. However, TransE has difficulty modeling one-to-many, many-to-one, many-to-many, and transitive relations. To address this problem, this paper proposes a knowledge representation learning model based on hyperplane projection and relational attributes, named TransH-RA. First, inspired by TransH, we introduce hyperplane projection into the TransE framework so that an entity can play different roles under different relations; this relaxes the constraints of TransE's translation rule by projecting the head entity h and the tail entity t onto the hyperplane of the relation r. Second, because similar entities are difficult to distinguish, neighborhood information is added so that the model learns from the entities surrounding each entity. Third, to further strengthen the handling of complex relations, relation attribute features are added and attribute knowledge is embedded. Finally, during training, head and tail entities are replaced probabilistically when sampling negative triples. Link prediction experiments on the public datasets FB15K and WN18 and triple classification experiments on WN11, FB13, and FB15K are carried out to verify the effectiveness of the proposed method. The evaluation results show that our method achieves better results than TransE and TransH on the MeanRank, Hits@10, and ACC metrics.
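TransH-RA builds on TransH's hyperplane projection. As a reminder of what that projection looks like in isolation (this is only the standard TransH scoring function, not the full TransH-RA model with its neighborhood and relation-attribute components), a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def transh_score(h: torch.Tensor, t: torch.Tensor,
                 w_r: torch.Tensor, d_r: torch.Tensor) -> torch.Tensor:
    """TransH-style plausibility score for a batch of triples.

    h, t : (batch, dim) head/tail entity embeddings
    w_r  : (batch, dim) normal vector of the relation-specific hyperplane
    d_r  : (batch, dim) translation vector lying on that hyperplane
    Lower scores mean more plausible triples.
    """
    w_r = F.normalize(w_r, p=2, dim=-1)                       # unit hyperplane normal
    h_perp = h - (h * w_r).sum(dim=-1, keepdim=True) * w_r    # project h onto the hyperplane
    t_perp = t - (t * w_r).sum(dim=-1, keepdim=True) * w_r    # project t onto the hyperplane
    return torch.norm(h_perp + d_r - t_perp, p=2, dim=-1)     # translation distance on the plane
```

During training, such scores are typically optimized with a margin-based ranking loss between observed triples and negatives obtained by corrupting the head or tail entity with some probability.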