Extracting entities and relations, a crucial step in many natural language processing tasks, transforms unstructured text into structured information and provides data support for constructing knowledge graphs (KGs) and knowledge vaults (KVs). Nevertheless, the mainstream relation-extraction approaches, the pipeline method and the joint method, ignore the dependency between the subject entity and the object entity. This work introduces a pre-trained BERT model combined with a dilated gated convolutional neural network (DGCNN) as an encoder to capture long-range semantic representations of the input sequence. In addition, we propose a cross-attention neural network as a decoder to learn the importance of each subject word for each word of the input sequence. Experiments on two widely used datasets, the New York Times (NYT) corpus and the WebNLG corpus, show that our model performs significantly better than the CasRel baseline, with absolute F1-score gains of 1.9% and 0.7%, respectively.
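To make the two building blocks named above concrete, the following is a minimal PyTorch sketch, not the authors' code: a residual dilated gated 1-D convolution (the DGCNN block) and a cross-attention layer that scores each subject token against each token of the input sequence. All module names, dilation rates, and dimensions are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedGatedConv1d(nn.Module):
    """Residual gated conv: out = x*(1-g) + conv(x)*g, with g = sigmoid(gate)."""
    def __init__(self, dim: int, dilation: int = 1, kernel_size: int = 3):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2  # keep sequence length fixed
        # one convolution produces both the candidate values and the gate
        self.conv = nn.Conv1d(dim, 2 * dim, kernel_size,
                              padding=pad, dilation=dilation)

    def forward(self, x):                         # x: (batch, seq, dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        value, gate = h.chunk(2, dim=-1)
        g = torch.sigmoid(gate)
        return x * (1.0 - g) + value * g          # gated residual mix

class SubjectCrossAttention(nn.Module):
    """Attends every sequence token to the subject-span tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)              # queries from the sequence
        self.k = nn.Linear(dim, dim)              # keys from the subject span
        self.v = nn.Linear(dim, dim)

    def forward(self, seq, subj):                 # seq: (B, L, d), subj: (B, S, d)
        scores = self.q(seq) @ self.k(subj).transpose(1, 2)   # (B, L, S)
        attn = F.softmax(scores / seq.size(-1) ** 0.5, dim=-1)
        return attn @ self.v(subj)                # subject-aware features, (B, L, d)

# Hypothetical usage: stack DGCNN blocks with growing dilations over BERT
# hidden states, then fuse a candidate subject span into every token.
enc = nn.Sequential(*[DilatedGatedConv1d(768, d) for d in (1, 2, 5)])
xattn = SubjectCrossAttention(768)
tokens = torch.randn(2, 40, 768)                  # stand-in for BERT outputs
subject = tokens[:, 3:6, :]                       # a hypothetical subject span
h = enc(tokens)
fused = h + xattn(h, subject)
print(fused.shape)                                # torch.Size([2, 40, 768])
```

Increasing dilations let the convolutional encoder cover long spans without pooling, which is consistent with the abstract's stated goal of capturing long-range semantics before the subject-conditioned decoding step.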