2022
DOI: 10.3390/math10081344
Contextual Semantic-Guided Entity-Centric GCN for Relation Extraction

Abstract: Relation extraction tasks aim to predict potential relations between entities in a target sentence. Because entity mentions are ambiguous in sentences, important contextual information can guide the semantic representation of entity mentions and improve the accuracy of relation extraction. However, most existing relation extraction models ignore the semantic guidance of contextual information to entity mentions and treat entity mentions and the textual context of a sentence equally. This results in low-accu…

Cited by 6 publications (3 citation statements) | References 35 publications
“…Next, the prototype-based vector is obtained by weighting all prototype embeddings according to their similarity to the sentence embedding using an attention mechanism [9]. Specifically, the sentence representation produced by the PCNN is used as the query vector; the cosine similarity between the sentence and each relation prototype is computed by Equation (3) and then normalized with the softmax function to obtain the attention distribution of the input sentence over the relation prototypes, as shown in Equation (4) [10,11]. Information is selectively extracted from the prototype embeddings by a weighted sum under this attention distribution, yielding the sentence vector P_h, as shown in Equation (5).…”
Section: Enhanced With Relation Prototype and Entity Type
confidence: 99%
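The attention step described in this excerpt (cosine similarity between a sentence query and relation prototypes, softmax normalization, weighted sum) can be sketched as follows. This is a minimal illustration, not the cited paper's code; the function and variable names are assumptions.

```python
import numpy as np

def prototype_attention(sentence_vec, prototypes):
    """Weight relation-prototype embeddings by their similarity to a
    sentence embedding. A sketch of the attention step described above;
    names are hypothetical.

    sentence_vec: (d,) sentence representation (e.g. from a PCNN encoder)
    prototypes:   (r, d) one embedding per relation prototype
    """
    # Cosine similarity between the sentence (query) and each prototype,
    # analogous to Equation (3) in the excerpt
    sims = prototypes @ sentence_vec / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(sentence_vec) + 1e-8
    )
    # Softmax normalization -> attention distribution over prototypes
    # (Equation (4) in the excerpt)
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # Weighted sum of prototype embeddings -> prototype-based vector P_h
    # (Equation (5) in the excerpt)
    return weights @ prototypes

# Toy example: two orthogonal prototypes, sentence aligned with the first
s = np.array([1.0, 0.0, 0.0])
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
p_h = prototype_attention(s, P)
```

The first prototype receives the larger attention weight because it is cosine-closest to the sentence vector, so P_h leans toward it.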
“…In recent years, graph convolutional networks (GCNs) have stood out for their ability to handle non-Euclidean structured data [11,15,16] and have attracted increasing attention from scholars for extracting latent spatial-temporal properties of traffic flows [17][18][19][20][21]. For example, the diffusion convolutional recurrent neural network (DCRNN) proposed by Li et al. [22] adopts a sequence-to-sequence (Seq2Seq) model to extract spatial-temporal dependencies from past traffic flows for traffic prediction.…”
Section: Introduction
confidence: 99%
“…The proposed variant, named the Augmented Whale Optimization Algorithm (AWOA), is tested on two benchmark suites. Contribution [17] proposes a contextual semantic-guided entity-centric graph convolutional network (CEGCN) model that enables entity mentions to obtain semantic-guided contextual information for more accurate relational representations. The model develops a self-attention-enhanced neural network that weighs the importance and relevance of different words to obtain semantic-guided contextual information, employs a dependency tree with entities as global nodes, and adds virtual edges to construct an entity-centric logical adjacency matrix (ELAM).…”
confidence: 99%
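The ELAM construction described above (dependency-tree edges, entities promoted to global nodes via virtual edges) can be sketched roughly as follows. The exact edge scheme and all names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def build_elam(n_tokens, dep_edges, entity_idx):
    """Sketch of an entity-centric logical adjacency matrix (ELAM).

    n_tokens:   number of tokens in the sentence
    dep_edges:  list of (head, dependent) index pairs from the dependency tree
    entity_idx: indices of entity tokens, treated as global nodes
    """
    A = np.eye(n_tokens)                  # self-loops for every token
    for h, t in dep_edges:                # dependency-tree edges (undirected)
        A[h, t] = A[t, h] = 1.0
    for e in entity_idx:                  # virtual edges: an entity node is
        A[e, :] = 1.0                     # connected to every token, making
        A[:, e] = 1.0                     # it a global node in the graph
    return A

# Toy sentence of 4 tokens: a small dependency chain, token 3 is an entity
A = build_elam(4, [(0, 1), (1, 2)], [3])
```

Feeding such a matrix into a GCN lets every token exchange information with the entity in one hop, while non-entity tokens still interact only along dependency edges.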