Few-shot relation extraction is an active research focus. Its central challenge is to extract relation semantic information fully from very little training data. Intuitively, raising awareness of relation semantics in sentences helps the model extract relation features more efficiently and alleviates the overfitting problem in few-shot learning. We therefore propose a model based on a prototypical network that enhances relation semantic features for few-shot relation extraction. First, we design a multi-level embedding encoder that combines position information with a Transformer and uses local information in the text to strengthen the relation semantics representation. Second, the encoded relation features are fed into a novel prototype network, in which query prototype-level attention guides the extraction of support prototypes, enhancing the prototype representation so that relations in query sentences are classified more accurately. Finally, through experimental comparison and discussion, we show that the proposed multi-level embedding encoder is effective and that prototype-level attention improves the stability of the model. Our model also improves substantially over baseline methods.
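As a rough illustration of the prototype-level attention idea, the sketch below re-weights support instances by their similarity to the query before averaging them into class prototypes, then classifies the query by its nearest prototype. This is a minimal assumed PyTorch example with hypothetical function names and toy dimensions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def attentive_prototypes(support, query):
    """Build class prototypes whose support instances are re-weighted by
    their similarity to the query (query-guided prototype-level attention).

    support: [N_way, K_shot, D] encoded support sentences
    query:   [D] one encoded query sentence
    Returns: [N_way, D] attention-weighted prototypes.
    """
    scores = torch.einsum('nkd,d->nk', support, query)          # [N, K] similarities
    weights = F.softmax(scores, dim=-1).unsqueeze(-1)           # [N, K, 1] attention
    return (weights * support).sum(dim=1)                       # [N, D] prototypes

def classify(support, query):
    """Assign the query to the class with the nearest attended prototype."""
    protos = attentive_prototypes(support, query)                # [N, D]
    dists = torch.cdist(query.unsqueeze(0), protos).squeeze(0)   # [N] distances
    return torch.argmin(dists).item()

# Toy usage: a 3-way 5-shot episode with 64-dimensional sentence embeddings.
support = torch.randn(3, 5, 64)
query = torch.randn(64)
print(classify(support, query))
```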
Few-shot knowledge graph completion (FKGC) determines the authenticity of candidate triples using only a small number of reference triples for a given relation. Intuitively, expressive relation features tie together triples that share the same relation. In existing methods, however, the relation features are not comprehensive enough, so the triples cannot learn sufficient association information for the FKGC task. In this paper, we construct an enhanced relation semantic representation model that associates reference triples from two aspects: external structure and internal semantics. On the one hand, because the structure around a triple helps reveal the relation semantics it implies, we propose a graph convolution network with attention and relation features to obtain graph structure features; the local structure of a triple can then be used to learn deeper relation semantics. On the other hand, entity information sharpens the perception of the relation semantics within a triple. To associate triples that share a relation through these enhanced relation semantics, we then propose a semantic mapping method that uses shared merged variables to map relation, entity, and graph structure features into the same embedding space. Finally, we build a prototype network based on attention convolution to extract the relation prototype representation and classify query triples, thereby completing the knowledge graph. Experiments show that the proposed model achieves excellent performance on two datasets commonly used for FKGC.
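The following sketch illustrates only the prototype-based scoring step: it forms a relation prototype by averaging encoded reference triples (a simple stand-in for the paper's attention-convolution extractor) and scores candidate triples by similarity to that prototype. Function names, dimensions, and the use of PyTorch are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F

def relation_prototype(ref_heads, ref_tails):
    """Form a relation prototype from a few reference triples by
    concatenating head/tail entity embeddings and averaging.

    ref_heads, ref_tails: [K, D] entity embeddings of K reference triples.
    Returns: [2*D] prototype representation of the relation.
    """
    pairs = torch.cat([ref_heads, ref_tails], dim=-1)  # [K, 2D] encoded pairs
    return pairs.mean(dim=0)                           # simple mean prototype

def score_candidates(prototype, cand_heads, cand_tails):
    """Score query triple candidates by cosine similarity to the prototype;
    higher scores suggest the candidate holds the same relation."""
    cands = torch.cat([cand_heads, cand_tails], dim=-1)                  # [M, 2D]
    return F.cosine_similarity(cands, prototype.unsqueeze(0), dim=-1)    # [M]

# Toy usage: 5 reference triples, 4 candidates, 32-dimensional entity embeddings.
proto = relation_prototype(torch.randn(5, 32), torch.randn(5, 32))
print(score_candidates(proto, torch.randn(4, 32), torch.randn(4, 32)))
```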