State-of-the-art methods for relation extraction consider the sentential context by modeling the entire sentence. However, syntactic indicators, i.e., certain phrases or words such as prepositions, are more informative than other words and may be beneficial for identifying semantic relations. Other approaches capture such information using fixed text triggers, but they ignore lexical diversity. To leverage both syntactic indicators and sentential contexts, we propose an indicator-aware approach for relation extraction. First, we extract syntactic indicators under the guidance of syntactic knowledge. Then we construct a neural network that incorporates both the syntactic indicators and the entire sentence into better relation representations. In this way, the proposed model alleviates the impact of noisy information in the entire sentence and overcomes the limitations of fixed text triggers. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model significantly outperforms the state-of-the-art methods.
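The abstract's notion of a syntactic indicator, a small set of informative words such as prepositions lying between two entity mentions, can be illustrated with a minimal sketch. This is not the paper's actual extraction procedure (which is guided by syntactic knowledge, presumably a parse); the word list, function name, and entity indices below are illustrative assumptions.

```python
# Illustrative sketch only: approximates syntactic-indicator extraction with a
# surface heuristic instead of the paper's parse-guided method.
# The closed-class word list below is a hypothetical example.
INDICATOR_WORDS = {"in", "of", "by", "from", "into", "caused", "inside"}

def indicator_candidates(tokens, e1_idx, e2_idx):
    """Return words between the two entity mentions that belong to a
    small closed class of likely relation indicators."""
    lo, hi = sorted((e1_idx, e2_idx))
    between = tokens[lo + 1 : hi]
    return [t for t in between if t.lower() in INDICATOR_WORDS]

# Example sentence from a Cause-Effect-style pattern; entity head
# positions are assumed to be known in advance.
tokens = ["The", "burst", "has", "been", "caused", "by",
          "water", "hammer", "pressure"]
print(indicator_candidates(tokens, 1, 8))  # -> ['caused', 'by']
```

Keeping only such indicator words, rather than a fixed trigger string, is what lets the approach tolerate lexical variation between sentences expressing the same relation.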
Relation classification is one of the most fundamental upstream tasks in natural language processing and information extraction. State-of-the-art approaches use various deep neural networks (DNNs) to extract higher-level features directly. They can readily achieve accurate classification results by taking advantage of both local entity features and global sentential features. Recent works on relation classification devote effort to modifying these neural networks, but less attention has been paid to feature design concerning syntax. From a linguistic perspective, however, syntactic features are essential for relation classification. In this article, we present a novel linguistically motivated approach that enhances relation classification by imposing additional syntactic constraints. We investigate leveraging syntactic skeletons together with sentential contexts to identify hidden relation types. The syntactic skeletons are extracted under the guidance of prior syntax knowledge. During extraction, the input sentences are recursively decomposed into syntactically shorter and simpler chunks. Experimental results on the SemEval-2010 Task 8 benchmark show that incorporating syntactic skeletons into current DNN models enhances relation classification. Our systems significantly surpass two strong baseline systems. One substantial advantage of our proposal is that the framework is extensible to most current DNN models.
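The recursive decomposition step described above, breaking a sentence into syntactically shorter and simpler chunks, can be sketched with a toy recursion. The paper's actual extraction uses prior syntax knowledge; here the connective list, the splitting rule, and the length threshold are all illustrative assumptions.

```python
# Illustrative sketch only: recursively split a token list at connectives
# until every chunk is short, loosely mimicking skeleton extraction.
# The connective set and max_len threshold are hypothetical choices.
CONNECTIVES = {",", ";", "and", "but", "which", "that"}

def decompose(tokens, max_len=5):
    """Split at the first interior connective, then recurse on both halves,
    until every chunk has at most max_len tokens."""
    if len(tokens) <= max_len:
        return [tokens]
    for i, tok in enumerate(tokens):
        if tok.lower() in CONNECTIVES and 0 < i < len(tokens) - 1:
            return decompose(tokens[:i], max_len) + decompose(tokens[i + 1:], max_len)
    return [tokens]  # no connective found; keep the chunk as-is

sentence = "The movie was long but the plot , surprisingly , held attention"
for chunk in decompose(sentence.split()):
    print(chunk)
```

Each resulting chunk is shorter and structurally simpler than the original sentence, which is the property the skeleton features rely on when they are fed to a DNN alongside the full sentential context.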