2020
DOI: 10.1109/access.2020.2996642
A Novel Document-Level Relation Extraction Method Based on BERT and Entity Information

Abstract: Document-level relation extraction aims to extract the relationships among entities in a paragraph of text. Compared with sentence-level relation extraction, the text in document-level relation extraction is much longer and contains many more entities, which makes document-level relation extraction a harder task. The number and complexity of the entities make it necessary to provide the models with enough entity information. To solve this problem, we put forward a document-level en…

Cited by 47 publications (29 citation statements). References 21 publications.
“…CNN YOLO is also used for object recognition and feature extraction in complex documents such as comics [111]. • BERT: A few studies [83], [13] reported the use of BERT for extracting features from the data. It is suitable for various types of language-processing tasks in NLP.…”
Section: Named Entity Recognition (NER)
Citation type: mentioning (confidence: 99%)
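The statement above describes BERT purely as a feature extractor. A minimal sketch of that pattern follows, assuming the Hugging Face `transformers` library; it does not reproduce any cited study's exact pipeline.

```python
# A minimal sketch of BERT as a frozen feature extractor, assuming the
# Hugging Face `transformers` library; no cited study's exact pipeline
# is implied.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()  # features only

enc = tokenizer("Document-level relation extraction.", return_tensors="pt")
with torch.no_grad():
    out = bert(**enc)
token_features = out.last_hidden_state  # (1, seq_len, 768) per-token vectors
text_feature = out.pooler_output        # (1, 768) whole-sequence vector
```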
“…Fine-tuning BERT for NER, combining BERT with word embeddings to improve performance on financial and biomedical documents, is discussed in the study [83]. Fine-tuning the BERT model for document-level relation extraction on DocRED, a large-scale open-domain document-level relation extraction dataset, shows an improvement in F1 measure in [13].…”
Section: Named Entity Recognition (NER)
Citation type: mentioning (confidence: 99%)
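The statement above summarizes [83] as combining BERT with word embeddings for NER. The sketch below shows one plausible reading of that idea: concatenating contextual BERT states with a static embedding table before a token-level tagger. The embedding table is a random stand-in for pretrained vectors and the tag set is hypothetical; this is not the architecture of [83].

```python
# A speculative sketch of combining BERT with static word embeddings for NER.
# The static table stands in for pretrained vectors (e.g. GloVe) and the tag
# count is a placeholder; not the architecture of [83].
import torch
import torch.nn as nn
from transformers import BertModel

NUM_TAGS = 9      # e.g. BIO tags for four entity types plus "O" (assumption)
STATIC_DIM = 100  # dimensionality of the static word vectors (assumption)
VOCAB = 30522     # bert-base-uncased vocabulary size

class BertPlusStaticNER(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.static = nn.Embedding(VOCAB, STATIC_DIM)  # stand-in for GloVe etc.
        self.tagger = nn.Linear(self.bert.config.hidden_size + STATIC_DIM,
                                NUM_TAGS)

    def forward(self, input_ids, attention_mask):
        ctx = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        # Concatenate contextual and static views of each token, then tag.
        combined = torch.cat([ctx, self.static(input_ids)], dim=-1)
        return self.tagger(combined)  # (batch, seq_len, NUM_TAGS)
```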
“…BERT can easily deal with ambiguous language. The study [18] proposed BERT for extracting relationships among entities at the document level from the DocRED dataset. BERT is then fine-tuned with different parameters, improving performance on the entity relation extraction task by 5 percentage points (F1 score: 47%) over BERT without fine-tuning (F1 score: 42%).…”
Section: Named Entity Recognition (NER)
Citation type: mentioning (confidence: 99%)
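To make the fine-tuning setup concrete, here is a minimal sketch of fine-tuning BERT end to end with an entity-pair classification head, as one might do for DocRED-style relation extraction. The label count, entity-position handling, and example positions are illustrative assumptions, not the method of [18].

```python
# A minimal sketch, not the method of [18]: BERT fine-tuned end to end with a
# linear head classifying the relation between one (head, tail) entity pair.
# Assumes Hugging Face `transformers` and PyTorch; label count and entity
# positions are placeholders.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

NUM_RELATIONS = 97  # e.g. DocRED's 96 relation types plus "no relation"

class BertRelationClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Concatenate head- and tail-entity token states, then classify.
        self.classifier = nn.Linear(2 * hidden, NUM_RELATIONS)

    def forward(self, input_ids, attention_mask, head_pos, tail_pos):
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(states.size(0), device=states.device)
        head_vec = states[batch, head_pos]  # token state at the head mention
        tail_vec = states[batch, tail_pos]  # token state at the tail mention
        return self.classifier(torch.cat([head_vec, tail_vec], dim=-1))

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertRelationClassifier()

enc = tokenizer("Barack Obama was born in Honolulu.", return_tensors="pt")
# Hypothetical token positions of the two entity mentions.
logits = model(enc["input_ids"], enc["attention_mask"],
               head_pos=torch.tensor([1]), tail_pos=torch.tensor([6]))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))  # 0 = "no relation"
loss.backward()  # gradients flow into BERT, i.e. full fine-tuning
```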
“…The latest development of pre-trained LMs relying on the transformer architecture [19] has been shown to capture semantic and syntactic features better [13], with [31] proving that pretrained LMs significantly improve performance in text classification tasks, prevent overfitting, and increase sample efficiency. Moreover, works [32], [33] that fine-tune pretrained LM models (most of them BERT [18]) have shown that simple NNs built on top of pretrained transformer-based models improve performance. Meanwhile, the DISTRE model [11] extended GPT [13] to the DS setting by incorporating a multi-instance training mechanism, proving that pre-trained LMs provide a stronger signal for DS than specific linguistic and side-information features [8].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
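DISTRE's multi-instance mechanism is mentioned above; the toy sketch below shows the general bag-level idea used in distant supervision, where selective attention pools sentence representations that share an entity pair and the relation label applies to the bag. The random tensors stand in for real encoder outputs; none of this reproduces DISTRE itself.

```python
# A toy sketch of bag-level (multi-instance) classification for distant
# supervision, in plain PyTorch. Selective attention pools representations of
# sentences mentioning the same entity pair; illustrative, not DISTRE's design.
import torch
import torch.nn as nn

HIDDEN, NUM_RELATIONS = 128, 5  # toy sizes

class BagClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.Linear(HIDDEN, 1)  # per-sentence attention score
        self.classifier = nn.Linear(HIDDEN, NUM_RELATIONS)

    def forward(self, sentence_reps):  # (num_sentences, HIDDEN)
        weights = torch.softmax(self.attn(sentence_reps), dim=0)  # (n, 1)
        bag_rep = (weights * sentence_reps).sum(dim=0)            # (HIDDEN,)
        # One prediction per bag: under distant supervision the label belongs
        # to the entity pair, not to any individual sentence.
        return self.classifier(bag_rep)

model = BagClassifier()
reps = torch.randn(3, HIDDEN)  # pretend: 3 sentences mentioning the same pair
logits = model(reps)
loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), torch.tensor([2]))
loss.backward()
```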