2020
DOI: 10.3390/info11110528

Semantic Enhanced Distantly Supervised Relation Extraction via Graph Attention Network

Abstract: Distantly Supervised relation extraction methods can automatically extract relations between entity pairs, which are essential for the construction of a knowledge graph. However, the automatically constructed datasets contain large amounts of low-quality sentences and noisy words, and current Distantly Supervised methods ignore these noisy data, resulting in unacceptable accuracy. To mitigate this problem, we present SEGRE (Semantic Enhanced Graph attention networks Relation Extraction), a novel Distantly Supervised approach…
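As a rough illustration of the graph-attention building block the abstract refers to, here is a minimal single-head graph attention layer over a per-sentence word graph. This is a generic sketch, not SEGRE's actual implementation; the class name, dimensions, and the use of a dependency-parse adjacency matrix are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Minimal single-head graph attention layer (generic GAT-style sketch;
    SEGRE's actual architecture is not reproduced here)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scorer

    def forward(self, h, adj):
        # h:   (n_words, in_dim) word representations for one sentence
        # adj: (n_words, n_words) 0/1 adjacency, e.g. from a dependency parse;
        #      assumed to include self-loops so every row has at least one edge
        z = self.W(h)                                    # (n, out_dim)
        n = z.size(0)
        # pairwise concatenations [z_i || z_j] -> raw attention scores e_ij
        zi = z.unsqueeze(1).expand(n, n, -1)
        zj = z.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        # mask non-edges so attention only flows along the graph
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                 # (n, n) edge weights
        return F.elu(alpha @ z)                          # aggregated features
```

The masking step is what distinguishes this from ordinary self-attention: a word attends only to its graph neighbors, which is how syntactic structure constrains the flow of semantic information.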

Cited by 2 publications (2 citation statements) · References 24 publications
“…Shi et al. [30] propose an advanced graph neural network that assigns higher weights, through breadth exploration, to the direct-neighbor words that contribute more to relation prediction. Ouyang et al. [31] use graph attention networks to encode syntactic features, capturing the important semantic information of related words in each sentence. Phi et al. [32] combine a bidirectional gated recurrent unit (BiGRU) model with a form of hierarchical attention that enhances the performance of the distantly supervised RE task.…”
Section: Related Work
confidence: 99%
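For context on the BiGRU-with-attention approach mentioned in the statement above, word-level attention over recurrent states is typically computed by scoring each hidden state against a learned query vector. The following is a minimal sketch under that assumption; the class name, dimensions, and scoring scheme are hypothetical, not Phi et al.'s code.

```python
import torch
import torch.nn as nn

class BiGRUWordAttention(nn.Module):
    """Sketch of word-level attention over BiGRU states, in the spirit of
    the hierarchical-attention RE models cited above (not any paper's code)."""

    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        # learned query vector against which every word state is scored
        self.query = nn.Parameter(torch.randn(2 * hidden_dim))

    def forward(self, x):
        # x: (batch, seq_len, emb_dim) word embeddings
        h, _ = self.gru(x)                        # (batch, seq_len, 2*hidden)
        scores = h @ self.query                   # (batch, seq_len)
        alpha = torch.softmax(scores, dim=-1)     # word-level attention weights
        return (alpha.unsqueeze(-1) * h).sum(1)   # sentence representation
```

In a hierarchical setup, a second attention of the same shape would then weight the resulting sentence vectors within each entity-pair bag.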
“…Secondly, word-level methods usually assign weights to the words in an instance according to criteria such as distance, without considering the role of those words in the semantic expression of the sentence [31,12,19,30]. As mentioned above, mutual information reflects the commonality between sentences, so mutual information can…”
Section: Introduction
confidence: 99%
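The excerpt breaks off before the mutual-information formulation, but a common instantiation of the underlying idea scores each word by its pointwise mutual information with a relation label, rather than by distance. A minimal sketch follows, assuming presence-based counts; the function name and setup are hypothetical and not taken from the cited paper.

```python
import math
from collections import Counter

def pmi_word_weights(sentences, labels):
    """Pointwise mutual information between each word and each relation label.

    sentences: list of token lists, one per instance
    labels:    list of relation labels, aligned with sentences
    Returns a dict mapping (word, label) -> PMI score.
    """
    word_counts, pair_counts, label_counts = Counter(), Counter(), Counter()
    n = len(sentences)
    for words, label in zip(sentences, labels):
        label_counts[label] += 1
        for w in set(words):                     # presence, not frequency
            word_counts[w] += 1
            pair_counts[(w, label)] += 1
    pmi = {}
    for (w, r), c in pair_counts.items():
        p_joint = c / n
        p_w, p_r = word_counts[w] / n, label_counts[r] / n
        pmi[(w, r)] = math.log(p_joint / (p_w * p_r))
    return pmi
```

Words that co-occur with a relation more often than chance receive positive scores, which can then serve as word-level weights in place of distance-based heuristics.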