2019
DOI: 10.48550/arxiv.1906.06127
Preprint

DocRED: A Large-Scale Document-Level Relation Extraction Dataset

Cited by 20 publications (39 citation statements)
References 30 publications
“…In order to evaluate the methodology on multiple datasets with similar relations, we chose a set of relations that appear in both the TACRED (Zhang et al., 2017) and DocRED (Yao et al., 2019) datasets with at least 50 development examples.…”
Section: Problem Statement and Setup
confidence: 99%
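
As a rough illustration of the selection step this excerpt describes, the sketch below keeps only relation types that appear in both datasets with at least 50 development examples. The loading helpers and field names are hypothetical, reading the threshold as per-dataset is an assumption, and the code presumes relation labels have already been mapped to a shared vocabulary, which the excerpt does not show.

```python
from collections import Counter

MIN_DEV_EXAMPLES = 50  # threshold quoted in the excerpt

def relation_counts(dev_examples):
    """Count development-set examples per relation label."""
    return Counter(ex["relation"] for ex in dev_examples)

def select_shared_relations(tacred_dev, docred_dev):
    """Relations present in both datasets with enough dev examples in each."""
    tacred_counts = relation_counts(tacred_dev)
    docred_counts = relation_counts(docred_dev)
    shared = set(tacred_counts) & set(docred_counts)
    return sorted(
        rel for rel in shared
        if tacred_counts[rel] >= MIN_DEV_EXAMPLES
        and docred_counts[rel] >= MIN_DEV_EXAMPLES
    )
```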
“…TACRED (Zhang et al., 2017), a large-scale multi-class relation extraction dataset built over newswire and web text, and DocRED (Yao et al., 2019), a dataset for document-level RE similarly designed for multi-class prediction. Per our setup above, we changed the setting of both datasets to per-relation binary classification.…”
Section: Problem Statement and Setup
confidence: 99%
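
A minimal sketch of the recasting this excerpt mentions, assuming each example is a dict with a "relation" field (an illustrative assumption): multi-class relation extraction becomes one binary task per relation, with examples of the target relation as positives and all others as negatives.

```python
def to_binary(examples, target_relation):
    """Relabel multi-class RE examples as a binary task for one relation."""
    return [
        {**ex, "label": 1 if ex["relation"] == target_relation else 0}
        for ex in examples
    ]

# Hypothetical usage: one binary task per selected relation.
# binary_tasks = {r: to_binary(examples, r) for r in selected_relations}
```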
“…Note that in this work we assume a set of labeled training examples is available, i.e., the ground-truth annotations contain complementary supporting paragraphs. Recently there has been a growth in such datasets (Yang et al., 2018; Yao et al., 2019), due to the increasing interest in model explainability. Such supervision signals can also be obtained with distant supervision.…”
Section: Training With Complementary Regularization
confidence: 99%
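
One plausible shape for the training setup this excerpt describes is a joint objective: the main task loss regularized by an auxiliary binary loss over which paragraphs are supporting, supervised by the gold evidence annotations. The PyTorch sketch below is an assumption about that general shape, not the cited paper's implementation; joint_loss and lambda_evi are hypothetical names.

```python
import torch.nn.functional as F

def joint_loss(task_logits, task_labels, evidence_logits, evidence_labels,
               lambda_evi=0.1):
    """Main task loss plus a complementary evidence-supervision term."""
    task = F.cross_entropy(task_logits, task_labels)
    # Per-paragraph binary supervision: supporting (1) or not (0).
    evidence = F.binary_cross_entropy_with_logits(
        evidence_logits, evidence_labels.float()
    )
    return task + lambda_evi * evidence
```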