Human-computer interaction on cloud computing platforms is increasingly important, but the semantic gap limits interaction performance, so it is necessary to understand semantic information across scenarios. Relation classification (RC) is an important method for formalizing semantics: it aims to classify the relation between two specified entities in a sentence. Existing RC models typically rely on supervised learning or distant supervision. Supervised learning requires large-scale labeled training datasets, which are not readily available; distant supervision introduces noise, and many long-tail relations still suffer from data sparsity. Few-shot learning, which is widely used in image classification, is an effective way to overcome data sparsity. In this paper, we apply few-shot learning to the relation classification task. However, in a text-based few-shot learning scenario not all instances contribute equally to the relation prototype, which causes the prototype deviation problem. To address this problem, we propose context attention-based prototypical networks. The context attention highlights the crucial instances in the support set to generate a better prototype. In addition, we explore the application of recently popular pre-trained language models to few-shot relation classification. Experimental results demonstrate that our model outperforms state-of-the-art models and converges faster.
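To make the idea concrete, here is a minimal sketch of how a context-attention prototype could be computed in PyTorch. The class name `ContextAttentionPrototype`, the scoring via a learned projection, and the softmax weighting are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class ContextAttentionPrototype(nn.Module):
    """Weight support instances by their relevance to the query
    before averaging them into a class prototype (hypothetical sketch)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Learned projection used to score support instances (assumption).
        self.proj = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, support: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # support: (K, D) encoded support instances for one relation
        # query:   (D,)   encoded query instance
        scores = self.proj(support) @ query          # (K,) relevance scores
        weights = torch.softmax(scores, dim=0)       # (K,) attention weights
        # Attention-weighted average replaces the plain mean prototype,
        # down-weighting unrepresentative support instances.
        return (weights.unsqueeze(-1) * support).sum(dim=0)  # (D,)
```

The resulting prototype would then be compared with the query encoding, e.g. by negative squared Euclidean distance as in standard prototypical networks, to score each candidate relation.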
Entity linking is the task of connecting mentions in a text to the appropriate entities in a knowledge graph. Existing entity linking models learn local compatibility from the content and global interdependence from the relevant knowledge graph for disambiguation. However, the local compatibility component of existing methods usually ignores the multi-angle interactions between mentions and candidate entities. To fully exploit the bidirectional connection between the input document and the knowledge graph, we propose Bidirectional Interaction Entity Linking (BI-INTEL). Its local compatibility component takes into account the correlation between mentions and candidate entities, as well as the compatibility between mention contexts and candidate entity descriptions. In the global interdependence component, stacked random walk layers learn the global interdependence among candidate entities to improve linking accuracy. Experiments show that BI-INTEL outperforms state-of-the-art methods by 3% on average.
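As a rough illustration of the global interdependence component, the sketch below propagates local compatibility scores over a candidate-entity graph with a few random-walk steps. The function name, the restart probability, and the number of layers are assumptions for illustration; the abstract does not specify how BI-INTEL's random walk layers are parameterized:

```python
import torch

def stacked_random_walk(local_scores: torch.Tensor,
                        transition: torch.Tensor,
                        num_layers: int = 3,
                        restart: float = 0.5) -> torch.Tensor:
    """Propagate candidate scores over an entity graph (hypothetical sketch).

    local_scores: (N,)   local-compatibility score for each candidate entity
    transition:   (N, N) row-normalized entity-to-entity transition matrix,
                  e.g. derived from knowledge-graph relatedness
    """
    s0 = local_scores / local_scores.sum()  # normalize to a distribution
    s = s0
    for _ in range(num_layers):
        # One hop of evidence propagation, with restart to the local scores
        # (random walk with restart); each iteration acts as one stacked layer.
        s = restart * s0 + (1.0 - restart) * transition.T @ s
    return s  # refined global scores per candidate entity
```

In practice the transition matrix could be learned or built from entity embedding similarities; stacking more layers lets evidence flow along longer paths in the graph.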