Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019) 2019
DOI: 10.18653/v1/d19-6501
Analysing Coreference in Transformer Outputs

Abstract: We analyse coreference phenomena in three neural machine translation systems trained with different data settings, with or without access to explicit intra- and cross-sentential anaphoric information. We compare system performance on two different genres: news and TED talks. To do this, we manually annotate the (possibly incorrect) coreference chains in the MT outputs and evaluate the coreference chain translations. We define an error typology that aims to go further than pronoun translation adequacy and include…

Cited by 2 publications (2 citation statements)
References 31 publications
“…Explicitation in Contemporary Works. Many contemporary works using the term "explicitation" focus on discourse MT (Hoek et al., 2015; Webber et al., 2015), developed mainly by Lapshinova-Koltunski et al. (2019, 2020). However, despite the broad coverage of the term, explicitation in these studies is limited to the insertion of connectives or the annotation of coreference on the target side.…”
Section: Related Work
confidence: 99%
“…In practice, context-aware models that do not leverage target-side contexts struggle to maintain these kinds of coreference consistency (Lapshinova-Koltunski et al., 2019) because of the asymmetric nature of grammatical components and data distributions. Results show that CorefCL can complement this limitation of source-only context-aware models.…”
Section: System
confidence: 99%