Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) 2022
DOI: 10.18653/v1/2022.gem-1.33

A Survey of Recent Error Annotation Schemes for Automatically Generated Text

Abstract: While automatically computing numerical scores remains the dominant paradigm in NLP system evaluation, error annotation and analysis are receiving increasing attention, with several error annotation schemes recently proposed for automatically generated text. However, there is little agreement about what error annotation schemes should look like, how many different types of errors should be distinguished, and at what level of granularity. In this paper, our aim is to map out work on annotating errors in human and…

Cited by 2 publications (1 citation statement)
References 31 publications
“…Multi-Language Support. Fine-grained evaluation has seen almost exclusive attention to English tasks (Huidrom and Belz, 2022). To smoothen the deployment barrier for multilingual fine-grained evaluation, all interface elements can be overridden to suit any language.…”
Section: Additional Features
Confidence: 99%