2022
DOI: 10.48550/arxiv.2205.05368
Preprint

Pre-trained Language Models as Re-Annotators

Abstract: Annotation noise is widespread in datasets, but manually revising a flawed corpus is time-consuming and error-prone. Hence, given the prior knowledge in Pre-trained Language Models and the expected uniformity across all annotations, we attempt to reduce annotation noise in the corpus automatically through two tasks: (1) Annotation Inconsistency Detection, which indicates the credibility of annotations, and (2) Annotation Error Correction, which rectifies abnormal annotations. We investigate how to acquire sem…
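As a rough illustration of the first task, annotation inconsistency detection can be sketched as checking whether an example's label agrees with the labels of its nearest neighbours in a pre-trained embedding space. This is a minimal sketch of that general idea, not the paper's actual method; the encoder name ("all-MiniLM-L6-v2"), the number of neighbours k, and the agreement threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): flag possibly inconsistent
# annotations by checking whether an example's label agrees with the labels
# of its nearest neighbours in a pre-trained sentence-embedding space.
# The model name, k, and agreement threshold are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

def flag_inconsistent(texts, labels, k=3, agreement_threshold=0.4):
    """Return indices of annotations whose label disagrees with most of
    their k nearest neighbours under cosine similarity."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained encoder would do
    emb = model.encode(texts, normalize_embeddings=True)  # unit vectors, so dot = cosine
    sims = emb @ emb.T                # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)   # exclude self-matches
    labels = np.asarray(labels)
    flagged = []
    for i in range(len(texts)):
        neighbours = np.argsort(sims[i])[-k:]              # k most similar examples
        agreement = np.mean(labels[neighbours] == labels[i])
        if agreement < agreement_threshold:
            flagged.append(i)
    return flagged

# Toy usage: "wonderful story" labelled "neg" reads positive, so it should stand out.
texts = ["great movie", "loved it", "fantastic film",
         "terrible acting", "awful plot", "wonderful story"]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]
print(flag_inconsistent(texts, labels))  # indices of suspicious annotations
```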

Cited by 0 publications
References: 112 publications