2019
DOI: 10.48550/arxiv.1909.11898
Preprint
Fine-tune Bert for DocRED with Two-step Process

Cited by 42 publications (42 citation statements)
References 15 publications
“…Effect of Language Models. To analyze the effect of language models, we conduct the following comparison experiments on different model settings: the two-step Bert-based document-level classifier (Wang et al, 2019) upon the constructed pseudo document, denoted as Bert-Two-Step; and our model (DocDS). The results are shown in Table 3 and Figure 2.…”
Section: Detailed Analysis (mentioning)
confidence: 99%
“…BERT-RE: It uses BERT to encode the document; entities are represented by their average word embeddings. A bilinear layer is applied to predict the relation between entity pairs [13].…”
Section: Comparison Models and Evaluation Metrics (mentioning)
confidence: 99%
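The statement above describes the BERT-RE baseline: entities are the average of their token embeddings, and a bilinear layer scores relations between entity pairs. A minimal sketch of that head, with random stand-in encoder outputs and hypothetical dimensions (the function names, `HIDDEN`, and `NUM_RELATIONS` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: encoder hidden dimension and number of relation types.
HIDDEN, NUM_RELATIONS = 8, 5

def entity_embedding(token_embeddings, mention_indices):
    """Represent an entity by the average embedding of its mention tokens."""
    return token_embeddings[mention_indices].mean(axis=0)

def bilinear_scores(head, tail, W, b):
    """Score each relation r for an entity pair: s_r = head^T W_r tail + b_r."""
    return np.einsum('i,rij,j->r', head, W, tail) + b

# Toy document encoding standing in for BERT's token outputs.
tokens = rng.normal(size=(20, HIDDEN))
e1 = entity_embedding(tokens, [2, 3])   # entity with two mention tokens
e2 = entity_embedding(tokens, [10])     # entity with one mention token

W = rng.normal(size=(NUM_RELATIONS, HIDDEN, HIDDEN))
b = np.zeros(NUM_RELATIONS)
scores = bilinear_scores(e1, e2, W, b)
predicted_relation = int(scores.argmax())
```

In a trained model `W` and `b` would be learned parameters and the argmax (or a per-relation threshold) would give the predicted relation type.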
“…BERT-Two-Step: Based on BERT-RE, it models document-level RE through a two-step process. The first step is to predict whether or not two entities have a relation; the second step is to predict the specific relation [13].…”
Section: Comparison Models and Evaluation Metrics (mentioning)
confidence: 99%
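The two-step process described above can be sketched as a pair of classifiers over the same pair features: a binary "does any relation hold?" check, followed by relation-type classification only for pairs that pass it. All parameter names and dimensions below are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_step_predict(pair_features, w_exist, W_rel, threshold=0.5):
    """Step 1: binary prediction of whether the pair holds any relation.
    Step 2: only if it does, classify the specific relation type."""
    p_exists = sigmoid(pair_features @ w_exist)
    if p_exists < threshold:
        return None  # step 1 says: no relation for this pair
    rel_scores = W_rel @ pair_features
    return int(rel_scores.argmax())

rng = np.random.default_rng(1)
feat = rng.normal(size=16)               # toy entity-pair representation
w_exist = rng.normal(size=16)            # step-1 (existence) weights
W_rel = rng.normal(size=(5, 16))         # step-2 (relation-type) weights
pred = two_step_predict(feat, w_exist, W_rel)
```

Filtering out no-relation pairs first is useful on DocRED because the vast majority of entity pairs express no relation, so the second classifier only has to discriminate among actual relation types.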