Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.142

TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task

Abstract: TACRED (Zhang et al., 2017) is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE). But, even with recent advances in unsupervised pretraining and knowledge enhanced neural RE, models still show a high error rate. In this paper, we investigate the questions: Have we reached a performance ceiling or is there still room for improvement? And how do crowd annotations, dataset, and models contribute to this error rate? To answer these questions, we first validate the most challeng…
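For context, TACRED performance (and hence the "error rate" the abstract refers to) is conventionally reported as micro-averaged precision, recall, and F1 over all predictions, with the no_relation class treated as negative. The sketch below illustrates that scoring convention; the gold/predicted labels are hypothetical toy values, not real TACRED data.

```python
# Minimal sketch of TACRED-style scoring: micro-averaged P/R/F1 with
# "no_relation" treated as the negative class (mirroring the logic of
# the official TACRED scorer). The toy labels below are hypothetical.
NO_RELATION = "no_relation"

def tacred_micro_f1(gold, pred):
    """Micro precision/recall/F1, excluding no_relation from the positives."""
    correct = guessed = actual = 0
    for g, p in zip(gold, pred):
        if p != NO_RELATION:
            guessed += 1          # model predicted some relation
        if g != NO_RELATION:
            actual += 1           # a relation actually holds
        if g == p and g != NO_RELATION:
            correct += 1          # correctly predicted relation
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical toy example (TACRED defines 41 relation types plus no_relation).
gold = ["per:title", "no_relation", "org:founded_by", "per:title"]
pred = ["per:title", "per:title", "no_relation", "per:title"]
print(tacred_micro_f1(gold, pred))  # -> (0.667, 0.667, 0.667), approximately
```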

Cited by 99 publications (119 citation statements: 2 supporting, 117 mentioning, 0 contrasting). References 21 publications.

“…We further demonstrate that EAE's learned entity representations are better than the pre-trained embeddings used by Zhang et al. (2019) at knowledge probing tasks and the TACRED relation extraction task (Zhang et al., 2017; Alt et al., 2020). We show that training EAE to focus on entities is better than imbuing a similar-sized network with an unconstrained memory store, and explain how EAE can outperform much larger sequence models while only accessing a small proportion of its parameters at inference time.…”
Section: Introduction (mentioning)
confidence: 78%
“…To determine whether this has a significant effect on a model's ability to model entity-entity relations, we evaluate EAE on the TACRED dataset (Zhang et al., 2017), using the revisited evaluation introduced by Alt et al. (2020). Table 7 shows that EAE outperforms KNOWBERT on the revised and weighted splits introduced by Alt et al. (2020), although it slightly under-performs in the original setting. This result indicates that EAE, without explicit entity-entity attention, can capture relations between entities effectively.…”
Section: Comparison to Alternative Entity Representations (mentioning)
confidence: 99%
“…(e.g.: Cohen et al., 2020; Wang et al., 2019; Peters et al., 2019), including the creation of many annotated data sets (e.g.: Zhang et al., 2017; Alt et al., 2020; Mesquita et al., 2019; Elsahar et al., 2019). These tasks consider only the recognition of knowledge directly expressed in individual texts, whereas we seek to utilise the combined knowledge from both a collection of texts and a knowledge base, allowing implicit and automatic association between expressions in texts and knowledge base relations, and inference of propositions not directly expressed in individual texts.…”
Section: Related Work (mentioning)
confidence: 99%
“…Additional details in the supplementary material. We chose to perform binary annotation, as we find it makes the annotation process faster and more accurate. As demonstrated by Alt et al. (2020), multi-class relation labeling by crowd workers leads to frequent annotation errors. We observed the same phenomenon with non-crowd workers as well.…”
(mentioning)
confidence: 99%