2021
DOI: 10.1007/s00521-021-05815-z
A joint model for entity and relation extraction based on BERT

Cited by 62 publications (27 citation statements) · References 22 publications
“…Gao et al [ 24 ] proposed a medical relationship extraction model based on BERT, which combined the whole sentence information obtained from the pretrained language model with the corresponding information of two medical entities to complete the relationship extraction task. Qiao et al [ 25 ] proposed an agricultural entity relationship extraction model based on BERT-BLSTM-LSTM, which can effectively extract the relationship between agricultural entities.…”
Section: Related Work
confidence: 99%
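The combination Gao et al. [24] describe — fusing a whole-sentence representation from the pretrained language model with the representations of the two entities — can be illustrated with a minimal numpy sketch. The hidden size, mean-pooling over entity spans, the entity positions, and the linear relation classifier are all illustrative assumptions here, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for contextual token embeddings from a BERT-style encoder
# (hidden size 8 instead of 768 to keep the sketch small).
hidden = 8
tokens = rng.normal(size=(12, hidden))  # one sentence, 12 wordpieces

def span_pool(embs, start, end):
    """Mean-pool the token vectors of an entity mention [start, end)."""
    return embs[start:end].mean(axis=0)

# Sentence representation: the first token plays the role of [CLS].
sentence_vec = tokens[0]
head_vec = span_pool(tokens, 2, 4)  # hypothetical entity 1 at positions 2-3
tail_vec = span_pool(tokens, 7, 9)  # hypothetical entity 2 at positions 7-8

# Concatenate sentence + entity vectors and score relation types linearly.
features = np.concatenate([sentence_vec, head_vec, tail_vec])  # shape (3*hidden,)
num_relations = 5
W = rng.normal(size=(num_relations, features.shape[0]))
logits = W @ features
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_relation = int(np.argmax(probs))
```

In a trained model `W` would be learned jointly with the encoder; the point of the sketch is only the feature construction, i.e. that the relation classifier sees the sentence-level vector and both entity vectors at once.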
“…Based on the relationship extraction method of the feature vector, first build a feature vector for the original text, and then find the relationship between the entities through the statistical learning model [ 9 ]. Qiao proposed a relationship extraction method based on the maximum entropy model, which is integrated with the lexical, syntax, and semantic feature vectors [ 10 ]. Pagan added the word block information and Word-net information to the built-in feature vector and enhanced the text's syntax information and semantic information [ 11 ].…”
Section: Related Discussion
confidence: 99%
“…Bert is a huge pre-trained model that shows excellent performance when generating entity embeddings of text (Qiao, 2021). The Bert model improves the performance of text classification through entity embedding, so it can identify tweets and generate corresponding damage reports, finally we classify the damage reports into corresponding topics.…”
Section: Text Data Classification Model
confidence: 99%