Proceedings of the 3rd Workshop on Neural Generation and Translation 2019
DOI: 10.18653/v1/d19-5601

Findings of the Third Workshop on Neural Generation and Translation

Abstract: This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the annual conference on Empirical Methods in Natural Language Processing (EMNLP 2019). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the two shared tasks: 1) efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient, and 2) document-level genera…

Cited by 17 publications (28 citation statements). References 23 publications.
“…This paper describes the submissions of the "Marian" team to the Workshop on Neural Generation and Translation (WNGT 2019) efficiency shared task (Hayashi et al., 2019). The goal of the task is to build NMT systems on CPUs and GPUs placed on the Pareto frontier of efficiency and accuracy.…”
Section: Introduction (confidence: 99%)
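The Pareto frontier mentioned in the excerpt above (systems not dominated in both decoding speed and translation quality) can be sketched in a few lines. The tuple layout and names here are illustrative assumptions, not the shared task's actual evaluation code:

```python
def pareto_frontier(systems):
    """Return the systems not dominated on (speed, quality).

    Each system is a (name, speed, quality) tuple; higher is better
    for both axes. A system is dominated if another system is at
    least as good on both axes and strictly better on one.
    """
    frontier = []
    for name, speed, quality in systems:
        dominated = any(
            s2 >= speed and q2 >= quality and (s2 > speed or q2 > quality)
            for _, s2, q2 in systems
        )
        if not dominated:
            frontier.append((name, speed, quality))
    return frontier


# Hypothetical submissions: (name, sentences/sec, BLEU)
systems = [("A", 10, 30.0), ("B", 5, 32.0), ("C", 4, 31.0)]
print(pareto_frontier(systems))  # C is dominated by B
```

In the shared task setting, submissions on the frontier represent the best available quality at their speed point, which is why both axes are measured jointly rather than ranking by quality alone.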
“…The efficiency task complements machine translation quality evaluation campaigns by also measuring and optimizing the computational cost of inference. This is the third edition of the task, updating and building upon the second edition of the task (Hayashi et al., 2019).…”
Section: Efficiency Task (confidence: 99%)
“…We use similar templates to generate a rough summary that is used as input in our rewrite model. Table 4: Generation results of our submitted systems as reported by the shared task organizers (Hayashi et al., 2019). RG: Relation Generation precision, CS: Content Selection (precision/recall), CO: Content Ordering.…”
Section: Generation With Pretrained LM (confidence: 99%)
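The Content Selection (CS) precision/recall named in the excerpt above can be illustrated with a simplified set-based sketch: it compares the data records mentioned in the generated summary against those in the reference. The function name and record representation are hypothetical, not the shared task's official scorer:

```python
def content_selection(pred_records, gold_records):
    """Simplified Content Selection metric.

    pred_records: records extracted from the generated summary.
    gold_records: records extracted from the reference summary.
    Returns (precision, recall) over unique records.
    """
    pred, gold = set(pred_records), set(gold_records)
    true_positives = len(pred & gold)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall
```

For example, if the generated summary mentions records {a, b, c} and the reference contains {b, c, d}, both precision and recall are 2/3.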